1
Miller-Kuhlmann R, Sasnal M, Gold CA, Nassar AK, Korndorffer JR, Van Schaik S, Marmor A, Williams S, Blankenburg R, Rassbach CE. Tips for developing a coaching program in medical education. Med Educ Online 2024; 29:2289262. [PMID: 38051864] [PMCID: PMC10783821] [DOI: 10.1080/10872981.2023.2289262]
Abstract
This article provides structure to developing, implementing, and evaluating a successful coaching program that effectively meets the needs of learners. We highlight the benefits of coaching in medical education and recognize that many educators desiring to build coaching programs seek resources to guide this process. We align 12 tips with Kern's Six Steps for Curriculum Development and integrate theoretical frameworks from the literature to inform the process. Our tips include defining the reasons a coaching program is needed, learning from existing programs and prior literature, conducting a needs assessment of key stakeholders, identifying and obtaining resources, developing program goals, objectives, and approach, identifying coaching tools, recruiting and training coaches, orienting learners, and evaluating program outcomes for continuous program improvement. These tips can serve as a framework for initial program development as well as iterative program improvement.
Affiliation(s)
- Marzena Sasnal
- Center for Research on Education Outcomes, Stanford University, Palo Alto, USA
- Carl A. Gold
- Department of Neurology and Neurological Sciences, Stanford University, Palo Alto, USA
- Sandrijn Van Schaik
- Department of Pediatrics, University of California at San Francisco, San Francisco, USA
- Andrea Marmor
- Department of Pediatrics, University of California at San Francisco, San Francisco, USA
- Sarah Williams
- Department of Emergency Medicine, Stanford University, Palo Alto, USA
2
Alavi M, Biros E, Cleary M. Notes to Factor Analysis Techniques for Construct Validity. Can J Nurs Res 2024; 56:164-170. [PMID: 37801518] [DOI: 10.1177/08445621231204296]
Abstract
This paper introduces and discusses factor analysis techniques for construct validity, including suggestions for reporting evidence from exploratory and confirmatory factor analysis to support construct validity. Construct validity is a vital part of psychological testing and a prerequisite for any measurement instrument, including measures of aptitude, achievement, and interests. Research, particularly in nursing and the health sciences, depends on reliable and valid measurements. There is therefore growing emphasis on assessing validity with respect to the structure of test variables, commonly estimated by factor analysis. However, it is not always clear how the analysis should be reported and used to support construct validity. Both exploratory and confirmatory factor analysis provide vital evidence for construct validity, but they are not the only evidence available: researchers should always consider other sources of evidence when developing and supporting the construct validity of their measures. Moreover, collecting and presenting such evidence is not a one-time exercise; construct validation is a continuous process that ultimately validates the underlying theories from which the constructs emerged.
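As a concrete illustration of the exploratory side of this workflow, the sketch below applies the Kaiser criterion (retain factors with eigenvalue greater than 1) and reports the share of total variance the retained factors explain. The eigenvalues and item count are invented for illustration; they are not taken from the paper.

```python
def kaiser_retained(eigenvalues):
    """Kaiser criterion: retain factors with eigenvalue > 1 and report
    the share of total variance they explain. For an EFA run on a
    correlation matrix, total variance equals the number of items."""
    retained = [e for e in eigenvalues if e > 1]
    explained = sum(retained) / len(eigenvalues)
    return len(retained), explained

# Hypothetical eigenvalues from a 10-item correlation matrix.
k, share = kaiser_retained([4.2, 2.1, 1.9, 0.6, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05])
# k = 3 factors retained, jointly explaining 82% of the variance.
```

Reporting both the number of retained factors and their cumulative variance, as several abstracts in this listing do, follows directly from this computation.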
Affiliation(s)
- Mousa Alavi
- Department of Mental Health Nursing, School of Nursing and Midwifery, Isfahan University of Medical Sciences, Isfahan, Iran
- Erik Biros
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Michelle Cleary
- School of Nursing, Midwifery & Social Sciences, Central Queensland University, Sydney, NSW, Australia
3
Najafi H, Hosseinnataj A, Esmailpour Moalem A, Ilali ES, Papi S. Cross-cultural adaptation and psychometric properties of the Persian version of the Geriatric Sleep Questionnaire (P-GSQ). Cranio 2024:1-11. [PMID: 38661332] [DOI: 10.1080/08869634.2024.2345570]
Abstract
OBJECTIVE This study aimed to validate the Geriatric Sleep Questionnaire (GSQ) for assessing subjective sleep quality among elderly individuals in Iran. METHODS The GSQ underwent evaluation for face and content validity. Participants were selected via convenience sampling from five healthcare centers. Sociodemographic variables, including gender, number of children, recreational activities, budget deficits, and family conflicts, were analyzed. Confirmatory factor analysis was conducted to verify the factor structure. Internal consistency was assessed using Cronbach's α, and test-retest reliability was evaluated using the intraclass correlation coefficient (ICC). RESULTS A total of 200 older adults (mean age 66.8 years) completed the questionnaires. Face and content validity were confirmed by 30 experts (S-CVI/average = 0.96). The final model exhibited good fit indices (χ2/df = 2.89, CFI = 0.96). The scale demonstrated acceptable internal consistency (α = 0.81) and test-retest reliability (ICC = 0.98). CONCLUSION The Persian GSQ demonstrates high reliability and validity for assessing sleep quality in older adults, aiding research in this field.
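The S-CVI/Ave reported here is conventionally computed as the mean of item-level content validity indices, each being the proportion of experts who rate the item relevant (3 or 4 on a 4-point scale). A minimal sketch with an invented 3-item, 5-expert panel; the paper's 30-expert ratings are not reproduced here.

```python
def i_cvi(ratings):
    """Item-level content validity index: share of experts rating 3 or 4
    on a 4-point relevance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(items_ratings):
    """Scale-level CVI, averaging method: mean of the item-level CVIs."""
    return sum(i_cvi(r) for r in items_ratings) / len(items_ratings)

# Toy panel: 3 items rated by 5 experts on a 4-point relevance scale.
panel = [
    [4, 4, 3, 4, 3],   # I-CVI = 1.0
    [4, 3, 2, 4, 4],   # I-CVI = 0.8
    [3, 4, 4, 4, 4],   # I-CVI = 1.0
]
scale_cvi = s_cvi_ave(panel)   # (1.0 + 0.8 + 1.0) / 3 ≈ 0.93
```

An S-CVI/Ave of 0.90 or higher is the commonly cited benchmark for excellent content validity, which the paper's 0.96 comfortably exceeds.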
Affiliation(s)
- Hadi Najafi
- Department of Geriatric Health, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Abolfazl Hosseinnataj
- Department of Biostatistics and Epidemiology, School of Health, Mazandaran University of Medical Sciences, Sari, Iran
- Atefe Esmailpour Moalem
- Department of Geriatric Nursing, School of Nursing and Midwifery, Mazandaran University of Medical Sciences, Sari, Iran
- Ehteram Sadat Ilali
- Department of Geriatric Nursing, School of Nursing and Midwifery, Mazandaran University of Medical Sciences, Sari, Iran
- Shahab Papi
- Dental Research Center, Mazandaran University of Medical Sciences, Sari, Iran
- Department of Geriatric Health, School of Health, Mazandaran University of Medical Sciences, Sari, Iran
4
Ayub R, Yousuf N, Shabnam N, Ashraf MA, Afzal AS, Rauf A, Khan DH, Kiran F. Investigating the internal structure of multiple mini interviews-A perspective from Pakistan. PLoS One 2024; 19:e0301365. [PMID: 38603708] [PMCID: PMC11008892] [DOI: 10.1371/journal.pone.0301365]
Abstract
BACKGROUND Healthcare professionals require many personal attributes, in addition to cognitive abilities and psychomotor skills, for competent practice. Multiple Mini-Interviews (MMIs) are employed globally to assess the personal attributes of candidates for selection into health professions education at all levels of entry; these attributes include communication skills, critical thinking, honesty, responsibility, health advocacy, empathy, and sanctity of life. Considering the high stakes for students, faculty, institutions, and society, student selection must be subject to quality assurance mechanisms as rigorous as those used for student assessment throughout the continuum of medical education. This is a difficult undertaking, as these psychological constructs are hard to define and measure. Though MMIs are considered to yield reliable and valid scores, studies providing multiple lines of evidence for their internal structure, especially dimensionality, are sparse, raising questions about whether they measure a single construct or multiple constructs, and even whether they measure what they purport to measure. OBJECTIVE The main objective is to provide statistical support, through confirmatory factor analysis (CFA), for the multidimensional nature of our MMIs, hypothesized a priori. A further objective is to provide multiple lines of evidence for their internal structure. Our study highlights the link between content and internal-structure evidence for the constructs, thereby establishing that our MMIs measure what they were intended to measure. METHOD After securing permission from the institutional review board, an a priori seven-factor model was hypothesized based on the attributes considered most essential for the institution's graduating students. After operationally defining the attributes through an extensive literature search, scenarios were constructed to assess them. A 5-point rating scale was used to rate each item at each station.
A total of 259 students participated in the MMIs over a period of three days. A training workshop was arranged for the participating faculty. RESULTS Reliability coefficients (Cronbach's alpha) ranged from 0.73 to 0.94, the standard error of measurement from 0.80 to 1.64, and item to station-total correlations from 0.43-0.50 to 0.75-0.83. Inter-station correlations were also determined. Confirmatory factor analysis endorsed the exploratory factor analysis results, supporting a seven-factor model with good fit on multiple indices: a root mean square error of approximation (RMSEA) of 0.05 and a standardized root mean square residual (SRMR) below 0.08. The CFA confirmed the multidimensional nature of our MMIs and that the stations measured the attributes they were intended to measure. CONCLUSION This study adds to the validity evidence for Multiple Mini-Interviews in selecting candidates with the personality traits required for the healthcare professions. It provides evidence for the multidimensional structure of the MMIs, supported by multiple lines of internal-structure evidence, and demonstrates the independence of the constructs being measured.
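The RMSEA quoted in such abstracts can be recovered from a model chi-square, its degrees of freedom, and the sample size. A minimal sketch; the chi-square and degrees of freedom below are invented for illustration, and only the sample size of 259 comes from the study.

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root mean square error of approximation for a fitted CFA model.

    Conventional cutoffs: <= 0.05 close fit, <= 0.08 reasonable fit.
    """
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical chi-square and df; n = 259 as in the study.
value = rmsea(chi2=320.0, df=200, n=259)   # ≈ 0.048, under the 0.05 cutoff
```

The max(..., 0) clamp reflects the convention that RMSEA is reported as zero when the chi-square falls below its degrees of freedom.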
Affiliation(s)
- Rukhsana Ayub
- Department of Health Professions Education, National University of Medical Science, Rawalpindi, Pakistan
- Naveed Yousuf
- Department for Educational Development, The Aga Khan University, Karachi, Pakistan
- Nadia Shabnam
- Department of Health Professions Education, National University of Medical Science, Rawalpindi, Pakistan
- Azam S. Afzal
- Department of Community Health Sciences & Department for Educational Development, The Aga Khan University, Karachi, Pakistan
- Ayesha Rauf
- Department of Health Professions Education, National University of Medical Science, Rawalpindi, Pakistan
- Danish Hassan Khan
- Clinical Project Manager, Tiger Med Consulting Pakistan Ltd, Punjab, Pakistan
- Faiza Kiran
- Department of Health Professions Education, National University of Medical Science, Rawalpindi, Pakistan
5
Mohammadi F, Masoumi SZ, Khazaei S, Hosseiny SMM. Psychometrics assessment of ethical decision-making around end-of-life care scale for adolescents in the final stage of life. Front Pediatr 2024; 11:1266929. [PMID: 38318315] [PMCID: PMC10839055] [DOI: 10.3389/fped.2023.1266929]
Abstract
Introduction Healthcare professionals play a critical role in ethical decision-making around end-of-life care. Properly evaluating the ethical decision-making of healthcare professionals in end-of-life care requires reliable, tailored, and comprehensive instruments. The current study aimed to translate and psychometrically assess a Persian version of the ethical decision-making in end-of-life care scale for Iranian adolescents in the final stages of life. Methods This was a methodological, multicenter study. A total of 310 healthcare professionals who treat or care for adolescents at the end of life were selected from 7 cities in Iran. The original version of the end-of-life care decision-making scale was translated into Persian using the forward-backward translation method, and its psychometric properties were evaluated using COSMIN criteria. Results Exploratory factor analysis revealed that the factor loadings of the items ranged from 0.68 to 0.89, all statistically significant. Furthermore, three factors had eigenvalues greater than 1, accounting for 81.64% of the total variance. Confirmatory factor analysis indicated good fit for the hypothesized factor structure. Internal consistency was satisfactory, with a Cronbach's alpha of 0.93. Conclusion The Persian version of the End-of-Life Care Decision-Making Scale demonstrates satisfactory validity and reliability among healthcare professionals working with adolescents in the final stages of life. Nursing managers can therefore use this tool to measure and evaluate ethical decision-making in end-of-life care for adolescents and to identify appropriate strategies, including educational interventions, to improve it where necessary.
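The Cronbach's alpha of 0.93 reported here is the standard internal-consistency coefficient. A minimal sketch of the computation on an invented 3-item, 5-respondent dataset (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    items: list of k item columns, each a list of n respondent scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    n = len(items[0])
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Toy data: 3 items, 5 respondents (illustrative only).
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)   # ≈ 0.87 for this toy data
```

Alpha rises when items covary strongly relative to their individual variances, which is why highly homogeneous scales like the one above report values above 0.9.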
Affiliation(s)
- Fateme Mohammadi
- School of Nursing and Midwifery, Chronic Diseases (Home Care) Research Center and Autism Spectrum Disorders Research Center, Department of Nursing, Hamadan University of Medical Sciences, Hamadan, Iran
- Seyedeh Zahra Masoumi
- Department of Midwifery, School of Nursing and Midwifery, Mother and Child Care Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
- Salman Khazaei
- Health Sciences Research Center, Health Sciences & Technology Research Institute, Hamadan University of Medical Science, Hamadan, Iran
6
Di Luigi G, Claréus B, Mejias Nihlén T, Malmquist A, Wurm M, Lundberg T. Psychometric Exploration of the Swedish Translation of the Sexual Orientation Microaggressions Scale (SOMS), and a Commentary on the Validity of the Construct of Microaggressions. J Homosex 2023:1-24. [PMID: 38019554] [DOI: 10.1080/00918369.2023.2284809]
Abstract
The aim of the present study was to assess the psychometric properties of a Swedish translation of the Sexual Orientation Microaggressions Scale (SOMS) in a convenience sample of 267 Swedish LGB+ people (Mean age = 36.41). Testing suggested some strengths in terms of factor structure and 2-week test-retest reliability (ICC > .79). Also, internal consistency (α = .80-.91) and convergent validity were supported for most subscales. However, the Assumption of Deviance subscale was associated with low response variability and internal consistency (α = .35), and the correlational pattern between the Environmental Microaggressions subscale and mental health variables diverged from the overall trend. Furthermore, measurement invariance between homo- and bisexual participants was not supported for most subscales, and although microaggressions would be theoretically irrelevant to a small comparison sample of heterosexual people (N = 76, Mean age = 40.43), metric invariance of the Environmental Microaggressions subscale was supported in comparison to LGB+ people. We argue that these limitations suggest a restricted applicability of the SOMS in a Swedish context, and this has consequences for the definition and operationalization of the construct of microaggressions as a whole. Therefore, more research on the latent properties of microaggressions in Swedish as well as in other contexts is required.
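Convergent validity of the kind assessed here is typically quantified as a correlation between the target scale and an established related measure. A minimal Pearson correlation sketch on invented scores (the SOMS data are not reproduced here):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented scores on two related scales for 6 respondents.
r = pearson_r([10, 12, 14, 15, 18, 20], [22, 25, 27, 30, 33, 36])
```

A strong positive r between a new subscale and a theoretically related measure supports convergent validity, while the divergent correlational pattern the authors observed for one subscale is exactly the kind of anomaly this check can expose.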
Affiliation(s)
- Guendalina Di Luigi
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Theodor Mejias Nihlén
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Anna Malmquist
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Matilda Wurm
- School of Behavioural, Social, and Legal Sciences, Örebro University, Örebro, Sweden
- Tove Lundberg
- Department of Psychology, Lund University, Lund, Sweden
7
Translation and psychometric assessment of a Persian version of medication safety competence scale (MSCS) for clinical nurses. Sci Rep 2023; 13:2247. [PMID: 36755086] [PMCID: PMC9908976] [DOI: 10.1038/s41598-023-29399-x]
Abstract
Nurses play a key role in medication safety and, by extension, patient safety. Evaluating nurses' medication safety competence requires valid, specific, and comprehensive instruments. The present study was conducted to translate and psychometrically assess a Persian version of the Medication Safety Competence Scale (MSCS) for clinical nurses in Iran. This was a cross-sectional, multicenter study with a methodological design. A total of 1080 clinical nurses were selected from 5 cities in Iran. The original version of the MSCS was translated into Persian, and its psychometric properties were assessed using COSMIN criteria. Exploratory factor analysis (EFA) showed that the factor loadings of the 36 items ranged from 0.72 to 0.87, all significant. Confirmatory factor analysis (CFA) fitted the data well (χ2/df = 7, RMSEA = 0.01, CFI = 0.96, NFI = 0.95, and TLI = 0.97). Reliability was assessed in terms of internal consistency; Cronbach's alpha for the whole instrument was 0.96. The Persian version of the MSCS possesses satisfactory validity and reliability, so nurse managers can use this instrument to measure medication safety competence in nurses.
8
Development and Validation of a Health Behaviour Scale: Exploratory Factor Analysis on Data from a Multicentre Study in Female Primary Care Patients. Behav Sci (Basel) 2022; 12:bs12100378. [PMID: 36285947] [PMCID: PMC9598194] [DOI: 10.3390/bs12100378]
Abstract
Health behaviours are among the most important proximal determinants of health and can either promote or harm the health of individuals. To assess and compare health behaviours across socioeconomic groups within the population, a comprehensive, valid, reliable, and culturally appropriate measure is needed. This study aimed to develop a health behaviour questionnaire and validate it in a sample of female patients over 45 years of age with cardiovascular disease (CVD). The development procedure encompassed the following stages: literature search and item generation, content validity testing (focus group and expert evaluation), and field testing. A preliminary 38-item Health Behaviour Scale (HBS) was developed and tested in a group of 487 female primary care patients over 45 years of age. An exploratory factor analysis (EFA) yielded a four-factor structure; the factors jointly accounted for 47% of the observed variance. The results confirmed very good internal consistency of the questionnaire: the Cronbach's alpha and McDonald's omega coefficients for the entire scale were 0.82 and 0.84, respectively. The factor and item structure of the final 16-item HBS reflects the specificity of the studied sample. This measure can be a useful tool for primary care practitioners and public health researchers, helping them develop interventions and strategies to reinforce health-promoting behaviours.
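McDonald's omega, reported here alongside alpha, can be obtained from the standardized loadings of a one-factor solution. A minimal sketch of omega total with invented loadings (the HBS loadings are not given in the abstract):

```python
def mcdonald_omega(loadings):
    """McDonald's omega total from standardized one-factor loadings.

    omega = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)),
    where 1 - lambda^2 is each item's error variance under
    standardized loadings.
    """
    s = sum(loadings)
    error_var = sum(1 - lam * lam for lam in loadings)
    return s * s / (s * s + error_var)

# Hypothetical standardized loadings for a 4-item factor.
omega = mcdonald_omega([0.7, 0.6, 0.8, 0.65])
```

Unlike alpha, omega does not assume all items load equally on the factor, which is why the two coefficients (0.82 and 0.84 here) usually differ slightly.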
9
Łopińska M, Gielecki JS, Żurada A. Flipped spotters learning model: An innovative student activity-based strategy. A preparation tool for anatomy practical examinations in medical education. Anat Sci Educ 2022; 15:886-897. [PMID: 34398534] [DOI: 10.1002/ase.2132]
Abstract
The flipped spotters learning model is a modern, student activity-based and learner-centered method in medical education. The aim of the study was to determine whether the flipped spotters learning model improves students' learning. Participants were 1214 medical students in the Polish (PD) and English (ED) divisions between the 2013 and 2019 academic years at the University of Warmia and Mazury in Olsztyn, Poland. They were divided into a traditional group (control group) and a flipped spotters learning group (treatment group). Each flipped spotters learning group was asked to label anatomical structures on various specimens, following a structure name list prepared by the teacher, at multiple stations. The flipped spotters group leaders were instructed to take pictures of the appropriately marked structures on each of the human body prosections. After completion of the class, each flipped spotters team received the photos for evaluation. In the flipped spotters learning model, students strengthened their skills and knowledge by matching specimens independently as a form of practical laboratory activity. Performance in gross anatomy practical examinations was compared between the group using the flipped spotters learning model and the group taught with the traditional model. Across all nine practical examinations, the treatment group scored on average 9.9 percentage points higher among PD students and 13.0 percentage points higher among ED students than the control group (effect sizes ranging from 0.47 to 0.95). The results suggest a positive impact of the flipped spotters model on students' performance in practical examinations.
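The quoted effect sizes (0.47 to 0.95) are standardized mean differences. A minimal Cohen's d sketch with a pooled standard deviation, using invented exam scores for two small groups rather than the study's data:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: standardized mean difference with pooled SD."""
    na, nb = len(treatment), len(control)
    pooled_sd = (((na - 1) * stdev(treatment) ** 2
                  + (nb - 1) * stdev(control) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Invented exam scores (percent) for two small groups.
d = cohens_d([78, 82, 75, 88, 80], [70, 74, 69, 77, 72])
```

By the usual convention, d around 0.5 is a medium effect and 0.8 or above a large one, so the study's range spans medium-to-large effects.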
Affiliation(s)
- Marcelina Łopińska
- Department of Anatomy, Collegium Medicum, Faculty of Medicine, University of Warmia and Mazury, Olsztyn, Poland
- Jerzy Stanisław Gielecki
- Department of Anatomy, Collegium Medicum, Faculty of Medicine, University of Warmia and Mazury, Olsztyn, Poland
- Anna Żurada
- Department of Radiology, Collegium Medicum, Faculty of Medicine, University of Warmia and Mazury, Olsztyn, Poland
10
Ghardallou M, Zedini C, Sahli J, Ajmi T, Khairi H, Mtiraoui A. Psychometric properties of a French version of the Jefferson Scale of Empathy. Int J Med Educ 2022; 13:205-214. [PMID: 35920177] [PMCID: PMC9904998] [DOI: 10.5116/ijme.62d2.8497]
Abstract
OBJECTIVE To assess the reliability and construct validity of a French version of the Jefferson Scale of Empathy-Students (JSE-S). METHODS A cross-sectional study was performed among undergraduate medical students in Tunisia. A total of 833 students, recruited by convenience sampling, completed a French version of the JSE-S. To assess the internal consistency aspect of reliability, Cronbach's alpha coefficient was computed. To assess construct validity, the sample was randomly divided into two groups. Data from the first group (n=415) were subjected to exploratory factor analysis (EFA), with principal axis factoring (PAF) and oblimin rotation, to re-examine the underlying factor structure of the scale. Data from the second group (n=419) were used for confirmatory factor analysis (CFA) to confirm its latent variable structure. Goodness-of-fit indices were used to assess the hypothesized model. Gender groups were compared using a t-test to check known-groups validity. RESULTS Reliability analysis showed an acceptable level of internal consistency, with an overall Cronbach's alpha of 0.78 (95% CI [0.75, 0.80]). EFA identified a two-factor structure accounting for 27.4% of the total variance. The two-factor model produced good fit indices when correlated item errors were allowed (χ2/df = 1.95, GFI = 0.92, CFI = 0.90, PCFI = 0.79, PGFI = 0.73 and RMSEA = 0.04). Female students had statistically significantly higher empathy scores than male students (t(830) = -4.16, p < .001). CONCLUSIONS The findings support the construct validity and reliability of a French version of the JSE for medical students. This instrument appears useful for investigating empathy among French-speaking populations.
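The known-groups comparison reported above is an independent-samples t-test. A minimal pooled-variance sketch on invented scores for two small groups, not the study's 833 students:

```python
from statistics import mean, variance

def pooled_t(group_a, group_b):
    """Independent-samples t statistic with pooled variance, as used
    for known-groups comparisons such as the gender difference above."""
    na, nb = len(group_a), len(group_b)
    sp2 = ((na - 1) * variance(group_a)
           + (nb - 1) * variance(group_b)) / (na + nb - 2)
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return (mean(group_a) - mean(group_b)) / se

# Invented empathy scores for two groups of 5.
t = pooled_t([110, 115, 112, 118, 111], [108, 109, 107, 112, 106])
```

The statistic carries na + nb - 2 degrees of freedom, which is where the study's t(830) comes from with group sizes summing to 832.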
Affiliation(s)
- Mariem Ghardallou
- University of Sousse, Faculty of Medicine of Sousse, Department of Community Medicine, Research Laboratory LR12ES03, Tunisia
- Chekib Zedini
- University of Sousse, Faculty of Medicine of Sousse, Department of Community Medicine, Research Laboratory LR12ES03, Tunisia
- Jihene Sahli
- University of Sousse, Faculty of Medicine of Sousse, Department of Community Medicine, Research Laboratory LR12ES03, Tunisia
- Thouraya Ajmi
- University of Sousse, Faculty of Medicine of Sousse, Department of Community Medicine, Research Laboratory LR12ES03, Tunisia
- Hedi Khairi
- University of Sousse, Faculty of Medicine of Sousse, Research Laboratory LR12ES03, Tunisia
- Ali Mtiraoui
- University of Sousse, Faculty of Medicine of Sousse, Department of Community Medicine, Research Laboratory LR12ES03, Tunisia
11
Carvalho-Alves MO, Petrilli-Mazon VA, Brunoni AR, Malbergier A, Fukuti P, Polanczyk GV, Miguel EC, Corchs F, Wang YP. Dimensions of emotional distress among Brazilian workers in a COVID-19 reference hospital: A factor analytical study. World J Psychiatry 2022; 12:843-859. [PMID: 35978972] [PMCID: PMC9258270] [DOI: 10.5498/wjp.v12.i6.843]
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) pandemic is an unprecedented challenge for public health and has caused the loss of millions of lives worldwide. Hospital workers play a key role in averting the collapse of the health system, but the mental health of many has deteriorated during the pandemic. Few studies have been devoted to identifying the needs of workers on frontline duty.
AIM To investigate dimensions of common emotional symptoms and associated predictors among Brazilian workers in a COVID-19 reference hospital.
METHODS This is an observational study of the mental health of professionals in a COVID-19 hospital in the city of São Paulo. We invited all hospital employees to respond to an online survey between July and August 2020, during the first peak of the pandemic. Data of 1000 participants who completed the survey were analyzed (83.9% were women and 34.3% were aged 30 to 40). Hospital workers self-reported the presence of symptoms of depression, anxiety, trauma-related stress, and burnout through the Patient Health Questionnaire-9, the Generalized Anxiety Disorder-7, the Impact of Event Scale-Revised and the Mini-Z Burnout Assessment respectively. Responses were assembled and subjected to exploratory factor analysis to reveal workers’ core emotional distress. Multiple linear regression models were subsequently carried out to estimate the likelihood of dimensions of distress using questions on personal motivation, threatening events, and institutional support.
RESULTS Around one in three participants in our sample scored above the threshold of depression, anxiety, post-traumatic stress disorder, and burnout. The factor analysis revealed a three-factor structure that explained 58% of the total data variance. Core distressing emotional domains were avoidance and re-experience, depression-anxiety, and sleep changes. Regression analysis revealed that institutional support was a significant protective factor for each of these dimensions (β range = -0.41 to -0.20, P < 0.001). However, participants’ personal motivation to work in healthcare service was not associated with these emotional domains. Moreover, the likelihood of presenting the avoidance and re-experience dimension was associated with having a family member or close friend be hospitalized or die due to COVID-19 and having faced an ethical conflict.
CONCLUSION Distressing emotional domains among hospital workers were avoidance and re-experience, depression and anxiety, and sleep changes. Improving working conditions through institutional support could protect hospital workers' mental health during devastating public health crises.
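The protective association reported for institutional support is expressed as standardized regression coefficients (β). For a single predictor, the standardized coefficient reduces to the slope rescaled by the predictor and outcome standard deviations; the multi-predictor betas in the study would require the full design matrix. A minimal one-predictor sketch on invented data:

```python
from statistics import mean, stdev

def simple_ols_beta(x, y):
    """Standardized coefficient of a simple (one-predictor) OLS regression.

    slope = cov(x, y) / var(x); standardized beta = slope * sd(x) / sd(y).
    With one predictor this equals the Pearson correlation.
    """
    n = len(x)
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    slope = cov / stdev(x) ** 2
    return slope * stdev(x) / stdev(y)

# Invented data: higher institutional support, lower distress score.
beta = simple_ols_beta([1, 2, 3, 4, 5], [9, 8, 8, 6, 5])
```

A negative beta, as here, mirrors the study's pattern in which greater institutional support predicted lower scores on each distress dimension.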
Affiliation(s)
- Marcos O Carvalho-Alves
- Department of Psychiatry, School of Medicine, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Program in Neuroscience and Behavior, Department of Experimental Psychology, Institute of Psychology, University of Sao Paulo, Sao Paulo 01060-970, Brazil
- Vitor A Petrilli-Mazon
- Department of Psychiatry, School of Medicine, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Andre R Brunoni
- Department of Psychiatry, School of Medicine, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Andre Malbergier
- Department of Psychiatry, School of Medicine, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Pedro Fukuti
- Department of Psychiatry, School of Medicine, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Guilherme V Polanczyk
- Department of Psychiatry, School of Medicine, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Euripedes C Miguel
- Department and Institute of Psychiatry, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Felipe Corchs
- Program in Neuroscience and Behavior, Department of Experimental Psychology, Institute of Psychology, University of Sao Paulo, Sao Paulo 01060-970, Brazil
- Department of Psychiatry, University of Sao Paulo, Sao Paulo 05403-010, Brazil
- Yuan-Pang Wang
- Department of Psychiatry, School of Medicine, University of Sao Paulo, Sao Paulo 05403-010, Brazil
12
Keegan RJ, Flood A, Niyonsenga T, Welvaert M, Rattray B, Sarkar M, Melberzs L, Crone D. Development and Initial Validation of an Acute Readiness Monitoring Scale in Military Personnel. Front Psychol 2021; 12:738609. [PMID: 34867619] [PMCID: PMC8636321] [DOI: 10.3389/fpsyg.2021.738609]
Abstract
Personnel in many professions must remain "ready" to perform diverse activities. Managing individual and collective capability is a common concern for leadership and decision makers. Typical existing approaches for monitoring readiness involve keeping detailed records of training, health and equipment maintenance, or, less commonly, collecting data from wearable devices, which can be difficult to interpret and raise privacy concerns. A widely applicable, simple psychometric measure of perceived readiness would be invaluable in generating rapid evaluations of current capability directly from personnel. To develop this measure, we conducted exploratory factor analysis and confirmatory factor analysis with a sample of 770 Australian military personnel. The 32-item Acute Readiness Monitoring Scale (ARMS) demonstrated good model fit and comprised nine factors: overall readiness; physical readiness; physical fatigue; cognitive readiness; cognitive fatigue; threat-challenge (i.e., emotional/coping) readiness; skills-and-training readiness; group-team readiness; and equipment readiness. Readiness factors were negatively correlated with recent stress, current negative affect and distress, and positively correlated with resilience, wellbeing, current positive affect, and a supervisor's rating of soldier readiness. The development of the ARMS facilitates a range of new research opportunities, enabling quick, simple, and easily interpreted assessment of individual and group readiness.
Affiliation(s)
- Richard James Keegan
  - Research Institute for Sport and Exercise, Faculty of Health, University of Canberra, Canberra, ACT, Australia
  - Faculty of Health, University of Canberra, Canberra, ACT, Australia
- Andrew Flood
  - Research Institute for Sport and Exercise, Faculty of Health, University of Canberra, Canberra, ACT, Australia
  - Faculty of Health, University of Canberra, Canberra, ACT, Australia
- Theo Niyonsenga
  - Faculty of Health, University of Canberra, Canberra, ACT, Australia
  - Health Research Institute, University of Canberra, Canberra, ACT, Australia
- Ben Rattray
  - Research Institute for Sport and Exercise, Faculty of Health, University of Canberra, Canberra, ACT, Australia
  - Faculty of Health, University of Canberra, Canberra, ACT, Australia
- Mustafa Sarkar
  - School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom
- David Crone
  - Department of Defence, Australian Government, Edinburgh, SA, Australia
13
Kusi-Appiah E, Karanikola M, Pant U, Meghani S, Kennedy M, Papathanassoglou E. Tools for assessment of acute psychological distress in critical illness: A scoping review. Aust Crit Care 2021; 34:460-472. [PMID: 33648818 DOI: 10.1016/j.aucc.2020.12.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/08/2020] [Revised: 11/23/2020] [Accepted: 12/13/2020] [Indexed: 10/22/2022]
Abstract
OBJECTIVES Patients' experience of psychological distress in the intensive care unit (ICU) is associated with adverse effects, reduced satisfaction, and delayed physical and psychological recovery. There are no specific guidelines for the assessment and management of acute psychological distress during hospitalisation in the ICU. We reviewed existing tools for the assessment of acute psychological distress in ICU patients, examined evidence on their metric properties, and identified potential gaps and methodological considerations. METHOD A scoping review based on literature searches (Cumulative Index to Nursing and Allied Health Literature, Medical Literature Analysis and Retrieval System Online, Excerpta Medica Database, PsycINFO, Scopus, Health and Psychosocial Instruments, Dissertations and Theses Global, and Google Scholar) and predefined eligibility criteria was conducted as per current scoping review guidelines. FINDINGS Overall, 14 assessment tools, developed in diverse ICU settings, were identified. The identified tools assess mainly anxiety and depressive symptoms and ICU stressors, and investigators have reported various validity and reliability metrics. It was unclear whether available tools can be used in specific groups, such as noncommunicative patients and patients with delirium, brain trauma, stroke, sedation, and cognitive impairments. CONCLUSION Available tools have methodological limitations worth considering in future investigations. Given the high prevalence of psychiatric morbidity in ICU survivors, rigorously exploring the metric integrity of available tools used for anxiety, depressive, and psychological distress symptom assessment in this vulnerable ICU population is a practice and research priority.
RELEVANCE TO CLINICAL PRACTICE These results have implications for the selection and implementation of psychological distress assessment methods as a means for promoting meaningful patient-centred clinical outcomes and humanising ICU care experiences.
Affiliation(s)
- Elizabeth Kusi-Appiah
  - Faculty of Nursing, University of Alberta, Edmonton Clinic Health Academy, Edmonton, AB T6G 1C9, Canada
- Maria Karanikola
  - Department of Nursing, Cyprus University of Technology, 15 Vragadinou str., Limassol 3041, Cyprus
- Usha Pant
  - Faculty of Nursing, University of Alberta, Edmonton Clinic Health Academy, Edmonton, AB T6G 1C9, Canada
- Shaista Meghani
  - Faculty of Nursing, University of Alberta, Edmonton Clinic Health Academy, Edmonton, AB T6G 1C9, Canada
- Megan Kennedy
  - John W. Scott Health Sciences Librarian, University of Alberta Library, 2K3.28 Walter C. Mackenzie Health Sciences Centre, Edmonton, AB T6G 2R7, Canada
- Elizabeth Papathanassoglou
  - Faculty of Nursing, University of Alberta, Edmonton Clinic Health Academy, Edmonton, AB T6G 1C9, Canada
14
Tavakol M, Wetzel A. Factor Analysis: a means for theory and instrument development in support of construct validity. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2020; 11:245-247. [PMID: 33170146 PMCID: PMC7883798 DOI: 10.5116/ijme.5f96.0f4a] [Citation(s) in RCA: 102] [Impact Index Per Article: 25.5] [Received: 10/13/2020] [Accepted: 10/25/2020] [Indexed: 06/11/2023]
Affiliation(s)
- Mohsen Tavakol
  - School of Medicine, Medical Education Centre, the University of Nottingham, UK
- Angela Wetzel
  - School of Education, Virginia Commonwealth University, USA
15
Lau S, Pek K, Chew J, Lim JP, Ismail NH, Ding YY, Cesari M, Lim WS. The Simplified Nutritional Appetite Questionnaire (SNAQ) as a Screening Tool for Risk of Malnutrition: Optimal Cutoff, Factor Structure, and Validation in Healthy Community-Dwelling Older Adults. Nutrients 2020; 12:2885. [PMID: 32967354 PMCID: PMC7551805 DOI: 10.3390/nu12092885] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Received: 08/29/2020] [Revised: 09/17/2020] [Accepted: 09/18/2020] [Indexed: 12/12/2022]
Abstract
Malnutrition is an independent marker of adverse outcomes in older adults. While the Simplified Nutritional Appetite Questionnaire (SNAQ) for anorexia has been validated as a nutritional screening tool, its optimal cutoff and validity in healthy older adults is unclear. This study aims to determine the optimal cutoff for SNAQ in healthy community-dwelling older adults, and to examine its factor structure and validity. We studied 230 community-dwelling older adults (mean age 67.2 years) who were nonfrail (defined by Fatigue, Resistance, Ambulation, Illnesses & Loss (FRAIL) criteria). When compared against the risk of malnutrition using the Mini Nutritional Assessment (MNA), the optimal cutoff for SNAQ was ≤15 (area under receiver operating characteristic (ROC) curve: 0.706, sensitivity: 69.2%, specificity: 61.3%). Using exploratory factor analysis, we found a two-factor structure (Factor 1: Appetite Perception; Factor 2: Satiety and Intake) which accounted for 61.5% variance. SNAQ showed good convergent, discriminant and concurrent validity. In logistic regression adjusted for age, gender, education and MNA, SNAQ ≤15 was significantly associated with social frailty, unlike SNAQ ≤4 (odds ratio (OR) 1.99, p = 0.025 vs. OR 1.05, p = 0.890). Our study validates a higher cutoff of ≤15 to increase sensitivity of SNAQ for anorexia detection as a marker of malnutrition risk in healthy community-dwelling older adults, and explicates a novel two-factor structure which warrants further research.
Affiliation(s)
- Sabrina Lau
  - Department of Geriatric Medicine, Tan Tock Seng Hospital, Singapore 308433, Singapore
  - Institute of Geriatrics and Active Ageing, Tan Tock Seng Hospital, Singapore 308433, Singapore
  - Correspondence: ; Tel.: +65-6359-6474
- Kalene Pek
  - Institute of Geriatrics and Active Ageing, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Justin Chew
  - Department of Geriatric Medicine, Tan Tock Seng Hospital, Singapore 308433, Singapore
  - Institute of Geriatrics and Active Ageing, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Jun Pei Lim
  - Department of Geriatric Medicine, Tan Tock Seng Hospital, Singapore 308433, Singapore
  - Institute of Geriatrics and Active Ageing, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Noor Hafizah Ismail
  - Institute of Geriatrics and Active Ageing, Tan Tock Seng Hospital, Singapore 308433, Singapore
  - Department of Continuing and Community Care, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Yew Yoong Ding
  - Department of Geriatric Medicine, Tan Tock Seng Hospital, Singapore 308433, Singapore
  - Institute of Geriatrics and Active Ageing, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Matteo Cesari
  - Department of Clinical Sciences and Community Health, University of Milan, 20122 Milan, Italy
  - Geriatric Unit, IRCCS Istituti Clinici Scientifici Maugeri, 20122 Milan, Italy
- Wee Shiong Lim
  - Department of Geriatric Medicine, Tan Tock Seng Hospital, Singapore 308433, Singapore
  - Institute of Geriatrics and Active Ageing, Tan Tock Seng Hospital, Singapore 308433, Singapore
16
Social Frailty Is Independently Associated with Mood, Nutrition, Physical Performance, and Physical Activity: Insights from a Theory-Guided Approach. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:4239. [PMID: 32545853 PMCID: PMC7345462 DOI: 10.3390/ijerph17124239] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Received: 05/06/2020] [Revised: 06/06/2020] [Accepted: 06/11/2020] [Indexed: 01/21/2023]
Abstract
Notwithstanding the increasing body of evidence that links social determinants to health outcomes, social frailty is arguably the least explored among the various dimensions of frailty. Using available items from previous studies to derive a social frailty scale as guided by the Bunt social frailty theoretical framework, we aimed to examine the association of social frailty, independently of physical frailty, with salient outcomes of mood, nutrition, physical performance, physical activity, and life-space mobility. We studied 229 community-dwelling older adults (mean age 67.22 years; 72.6% females) who were non-frail (defined by the FRAIL criteria). Using exploratory factor analysis, the resultant 8-item Social Frailty Scale (SFS-8) yielded a three-factor structure comprising social resources, social activities and financial resource, and social need fulfilment (score range: 0-8 points). Social non-frailty (SNF), social pre-frailty (SPF), and social frailty (SF) were defined based on optimal cutoffs, with corresponding prevalence of 63.8%, 28.8%, and 7.4%, respectively. In logistic regression adjusted for significant covariates and physical frailty (Modified Fried criteria), SPF was associated with poor physical performance and low physical activity (odds ratio, OR range: 3.10 to 6.22), and SF with depressive symptoms, malnutrition risk, poor physical performance, and low physical activity (OR range: 3.58 to 13.97), compared with SNF. There was no significant association of SPF or SF with life-space mobility. In summary, through a theory-guided approach, our study demonstrates the independent association of social frailty with a comprehensive range of intermediary health outcomes in more robust older adults. A holistic preventative approach to frailty should include upstream interventions that target social frailty to address the social gradient and inequalities.
17
McKinney M, Smith KE, Dong KA, Babenko O, Ross S, Kelly MA, Salvalaggio G. Development of the Inner City Attitudinal Assessment Tool (ICAAT) for learners across health care professions. BMC Health Serv Res 2020; 20:174. [PMID: 32143705 PMCID: PMC7059309 DOI: 10.1186/s12913-020-5000-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/17/2018] [Accepted: 02/14/2020] [Indexed: 11/16/2022]
Abstract
BACKGROUND Many health professions learners report feeling uncomfortable and underprepared for professional interactions with inner city populations. These learners may hold preconceptions which affect therapeutic relationships and provision of care. Few tools exist to measure learner attitudes towards these populations. This article describes the development and validity evidence behind a new tool measuring health professions learner attitudes toward inner city populations. METHODS Tool development consisted of four phases: 1) Item identification and generation informed by a scoping review of the literature; 2) Item refinement involving a two stage modified Delphi process with a national multidisciplinary team (n = 8), followed by evaluation of readability and response process validity with a focus group of medical and nursing students (n = 13); 3) Pilot testing with a cohort of medical and nursing students; and 4) Analysis of psychometric properties through factor analysis and reliability. RESULTS A 36-item online version of the Inner City Attitudinal Assessment Tool (ICAAT) was completed by 214 of 1452 undergraduate students (67.7% from medicine; 32.3% from nursing; response rate 15%). The resulting tool consists of 24 items within a three-factor model - affective, behavioural, and cognitive. Reliability (internal consistency) values using Cronbach alpha were 0.87, 0.82, and 0.82 respectively. The reliability of the whole 24-item ICAAT was 0.90. CONCLUSIONS The Inner City Attitudinal Assessment Tool (ICAAT) is a novel tool with evidence to support its use in assessing health care learners' attitudes towards caring for inner city populations. This tool has potential to help guide curricula in inner city health.
Affiliation(s)
- Mark McKinney
  - Inner City Health and Wellness Program, Edmonton, AB, Canada
  - Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Katherine E. Smith
  - Alberta Health Services, Edmonton, AB, Canada
  - Department of Emergency Medicine, University of Alberta, Edmonton, AB, Canada
- Kathryn A. Dong
  - Inner City Health and Wellness Program, Edmonton, AB, Canada
  - Alberta Health Services, Edmonton, AB, Canada
  - Department of Emergency Medicine, University of Alberta, Edmonton, AB, Canada
- Oksana Babenko
  - Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
- Shelley Ross
  - Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
- Martina A. Kelly
  - Department of Family Medicine, University of Calgary Cumming School of Medicine, Calgary, AB, Canada
- Ginetta Salvalaggio
  - Inner City Health and Wellness Program, Edmonton, AB, Canada
  - Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
  - Department of Family Medicine, University of Alberta Faculty of Medicine & Dentistry, 610 University Terrace, Edmonton, AB T6G 2T4, Canada
18
Rauvola RS, Briggs EP, Hinyard LJ. Nomology, validity, and interprofessional research: The missing link(s). J Interprof Care 2020; 34:545-556. [DOI: 10.1080/13561820.2020.1712333] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 10/25/2022]
Affiliation(s)
- Rachel S. Rauvola
  - Center for Interprofessional Education and Research, Saint Louis University, St. Louis, MO, USA
  - Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Erick P. Briggs
  - Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Leslie J. Hinyard
  - Center for Interprofessional Education and Research & Center for Health Outcomes Research, Saint Louis University, St. Louis, MO, USA
19
Fielding A, Mulquiney K, Canalese R, Tapley A, Holliday E, Ball J, Klein L, Magin P. A general practice workplace-based assessment instrument: Content and construct validity. MEDICAL TEACHER 2020; 42:204-212. [PMID: 31597048 DOI: 10.1080/0142159x.2019.1670336] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Indexed: 06/10/2023]
Abstract
Introduction: Relatively few general practice (GP) workplace-based assessment instruments have been psychometrically evaluated. This study aims to establish the content validity and internal consistency of the General Practice Registrar Competency Assessment Grid (GPR-CAG). Methods: The GPR-CAG was constructed as a formative assessment instrument for Australian GP registrars (trainees). GPR-CAG items were determined by an iterative literature review, expert opinion and pilot-testing process. Validation data were collected, between 2014 and 2016, during routine clinical teaching visits within registrars' first two general practice training terms (GPT1 and GPT2) for registrars across New South Wales and the Australian Capital Territory. Factor analysis and expert consensus were used to refine items and establish GPR-CAG's internal structure. GPT1 and GPT2 competencies were analysed separately. Results: Data of 555 registrars undertaking GPT1 and 537 registrars undertaking GPT2 were included in analyses. A four-factor, 16-item solution was identified for GPT1 competencies (Cronbach's alpha range: 0.71-0.83) and a seven-factor, 27-item solution for GPT2 competencies (Cronbach's alpha: 0.63-0.84). The emergent factor structures were clinically characterisable and resonant with existing medical education competency frameworks. Discussion: This study establishes initial evidence for the content validity and internal consistency of the GPR-CAG, which appears to have utility as a formative workplace-based assessment (WBA) instrument for GP training.
Affiliation(s)
- Alison Fielding
  - GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
  - School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Katie Mulquiney
  - GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
  - School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Amanda Tapley
  - GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
  - School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Elizabeth Holliday
  - School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Jean Ball
  - Clinical Research Design IT and Statistical Support, Hunter Medical Research Institute, New Lambton, Australia
- Linda Klein
  - GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
  - School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Parker Magin
  - GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
  - School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
20
Shankar S, Miller WC, Roberson ND, Hubley AM. Assessing Patient Motivation for Treatment: A Systematic Review of Available Tools, Their Measurement Properties, and Conceptual Definition. J Nurs Meas 2019; 27:177-209. [PMID: 31511404 DOI: 10.1891/1061-3749.27.2.177] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Indexed: 11/25/2022]
Abstract
BACKGROUND AND PURPOSE Motivation is often reported by clinicians and researchers as a key factor related to treatment and health outcomes. This systematic review aims to (a) identify and critically appraise tools that measure patient motivation for treatment and (b) determine how these tools define and evaluate motivation. METHODS Library databases and the search engine Google Scholar were examined. Tools that measured patient motivation for treatment and reported measurement properties were selected. RESULTS Fourteen peer-reviewed articles covering 12 different tools made the final selection. Quality was assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) and a new measure checklist. Reliability evidence was predominantly estimated using internal consistency; validity evidence was limited, and responsiveness was seldom examined. Overall, quality ratings were poor or inadequately reported, and serious methodological limitations were identified. A lack of conceptual foundation also lowered quality ratings, as tools did not apply a theory related to motivation or provide a clear definition of the construct of patient motivation. CONCLUSIONS A significant gap exists in available tools with adequate measurement properties that use relevant theoretical frameworks.
Affiliation(s)
- Sneha Shankar
  - Measurement, Evaluation and Research Methodology, Department of Educational and Counselling Psychology, and Special Education, Faculty of Education, University of British Columbia, Vancouver, Canada
  - Department of Occupational Science and Occupational Therapy, Faculty of Medicine, University of British Columbia, Vancouver, Canada
- William C Miller
  - Department of Occupational Science and Occupational Therapy, Faculty of Medicine, University of British Columbia, Vancouver, Canada
  - Rehabilitation Research Program, GF Strong Rehabilitation Centre, Vancouver, Canada
- Nathan D Roberson
  - Measurement, Evaluation and Research Methodology, Department of Educational and Counselling Psychology, and Special Education, Faculty of Education, University of British Columbia, Vancouver, Canada
- Anita M Hubley
  - Measurement, Evaluation and Research Methodology, Department of Educational and Counselling Psychology, and Special Education, Faculty of Education, University of British Columbia, Vancouver, Canada
21
Bachmann L, Groenvik CKU, Hauge KW, Julnes S. Failing to Fail nursing students among mentors: A confirmatory factor analysis of the Failing to Fail scale. Nurs Open 2019; 6:966-973. [PMID: 31367420 PMCID: PMC6650756 DOI: 10.1002/nop2.276] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 09/14/2018] [Revised: 12/19/2018] [Accepted: 03/12/2019] [Indexed: 11/23/2022]
Abstract
AIM The aim was to explore the psychometric properties, with respect to internal consistency reliability, of the subject-specific questionnaire "Failing to Fail." DESIGN Cross-sectional study. METHODS Exploratory factor analysis with varimax rotation was performed, and a confirmatory factor analysis was used to examine the factor structure of the "Failing to Fail" scale. The sample included 336 Norwegian nurse mentors. RESULTS The confirmatory factor analysis confirmed a five-factor structure of the "Failing to Fail" scale with adequate model fit. The factors were named: (a) Insufficient mentoring competence; (b) Insufficient support in the working environment; (c) Emotional process dominates the assessment; (d) Insufficient support from the university; and (e) Decision-making detached from learning outcomes. The scale proved feasible for testing whether mentors are failing to fail nursing students. The confirmatory factor analysis model supported the predictive validity of the "Failing to Fail" scale.
Affiliation(s)
- Liv Bachmann
  - Faculty of Health Sciences and Social Care, Molde University College, Specialized University in Logistics, Molde, Norway
- Kari Westad Hauge
  - Faculty of Health Sciences and Social Care, Molde University College, Specialized University in Logistics, Molde, Norway
- Signe Julnes
  - Faculty of Health Sciences and Social Care, Molde University College, Specialized University in Logistics, Molde, Norway
22
Paul CR, Ryan MS, Dallaghan GLB, Jirasevijinda T, Quigley PD, Hanson JL, Khidir AM, Petershack J, Jackson J, Tewksbury L, Rocha MEM. Collecting Validity Evidence: A Hands-on Workshop for Medical Education Assessment Instruments. MEDEDPORTAL : THE JOURNAL OF TEACHING AND LEARNING RESOURCES 2019; 15:10817. [PMID: 31139736 PMCID: PMC6507922 DOI: 10.15766/mep_2374-8265.10817] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Indexed: 05/10/2023]
Abstract
INTRODUCTION There is an increasing call for developing validity evidence in medical education assessment. The literature lacks a practical resource regarding an actual development process. Our workshop teaches how to apply principles of validity evidence to existing assessment instruments and how to develop new instruments that will yield valid data. METHODS The literature, consensus findings of curricula and content experts, and principles of adult learning guided the content and methodology of the workshop. The workshop underwent stringent peer review prior to presentation at one international and three national academic conferences. In the interactive workshop, selected domains of validity evidence were taught with sequential cycles of didactics, demonstration, and deliberate practice with facilitated feedback. An exercise guide steered participants through a stepwise approach. Using Likert-scale items and open-response questions, an evaluation form rated the workshop's effectiveness, captured details of how learners reached the objectives, and determined participants' plans for future work. RESULTS The workshop demonstrated generalizability with successful implementation in diverse settings. Sixty-five learners, the majority being clinician-educators, completed evaluations. Learners rated the workshop favorably for each prompt. Qualitative comments corroborated the workshop's effectiveness. The active application and facilitated feedback components allowed learners to reflect in real time as to how they were meeting a particular objective. DISCUSSION This feasible and practical educational intervention fills a literature gap by showing the medical educator how to apply validity evidence to both existing and in-development assessment instruments. Thus, it holds the potential to significantly impact learner and, subsequently, patient outcomes.
Affiliation(s)
- Caroline R. Paul
  - Assistant Professor, Department of Pediatrics, University of Wisconsin School of Medicine and Public Health
  - Corresponding author:
- Michael S. Ryan
  - Associate Professor, Department of Pediatrics, Virginia Commonwealth University School of Medicine
- Gary L. Beck Dallaghan
  - Research Associate Professor, Department of Pediatrics, University of North Carolina School of Medicine
- Patricia D. Quigley
  - Assistant Professor, Department of Pediatrics, Johns Hopkins University School of Medicine
- Janice L. Hanson
  - Professor, Department of Pediatrics, University of Colorado School of Medicine
- Amal M. Khidir
  - Associate Professor, Department of Pediatrics, Weill Cornell Medical College in Qatar
- Jean Petershack
  - Professor, Department of Pediatrics, University of Texas Health Science Center at San Antonio
- Joseph Jackson
  - Assistant Professor, Department of Pediatrics, Duke University Hospital
- Linda Tewksbury
  - Professor, Department of Pediatrics, New York University School of Medicine
23
Bindels E, Boerebach B, van der Meulen M, Donkers J, van den Goor M, Scherpbier A, Lombarts K, Heeneman S. A New Multisource Feedback Tool for Evaluating the Performance of Specialty-Specific Physician Groups: Validity of the Group Monitor Instrument. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2019; 39:168-177. [PMID: 31306280 DOI: 10.1097/ceh.0000000000000262] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 06/10/2023]
Abstract
INTRODUCTION Since clinical practice is a group-oriented process, it is crucial to evaluate performance on the group level. The Group Monitor (GM) is a multisource feedback tool that evaluates the performance of specialty-specific physician groups in hospital settings, as perceived by four different rater classes. In this study, we explored the validity of this tool. METHODS We explored three sources of validity evidence: (1) content, (2) response process, and (3) internal structure. Participants were 254 physicians, 407 staff, 621 peers, and 282 managers of 57 physician groups (in total 479 physicians) from 11 hospitals. RESULTS Content was supported by the fact that the items were based on a review of an existing instrument. Pilot rounds resulted in reformulation and reduction of items. Four subscales were identified for all rater classes: Medical practice, Organizational involvement, Professionalism, and Coordination. Physicians and staff had an extra subscale, Communication. However, the results of the generalizability analyses showed that variance in GM scores could mainly be explained by the specific hospital context and the physician group specialty. Optimization studies showed that for reliable GM scores, 3 to 15 evaluations were needed, depending on rater class, hospital context, and specialty. DISCUSSION The GM provides valid and reliable feedback on the performance of specialty-specific physician groups. When interpreting feedback, physician groups should be aware that rater classes' perceptions of their group performance are colored by the hospitals' professional culture and/or the specialty.
Affiliation(s)
- Elisa Bindels
- Ms. Bindels: PhD Candidate, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands, and Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Boerebach: Staff Advisor, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands. Ms. van der Meulen: PhD Candidate, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands, and Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Donkers: Assistant Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. van den Goor: PhD Candidate, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands, and Q3 Consult, Zeist, the Netherlands. Dr. Scherpbier: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Lombarts: Professor, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands. Dr. Heeneman: Professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
24
Lestra M, Marco J, Péran B, Martinez-Thomas M, Aït-Ali C. Let’s move forward with the transformative adult learning process. An experiment conducted at EuroPCR 2016. EUROINTERVENTION 2018; 14:e1262-e1267. [DOI: 10.4244/eijv14i12a228] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
25
Artino AR, Phillips AW, Utrankar A, Ta AQ, Durning SJ. "The Questions Shape the Answers": Assessing the Quality of Published Survey Instruments in Health Professions Education Research. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:456-463. [PMID: 29095172 DOI: 10.1097/acm.0000000000002002] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
PURPOSE Surveys are widely used in health professions education (HPE) research, yet little is known about the quality of the instruments employed. Poorly designed survey tools containing unclear or poorly formatted items can be difficult for respondents to interpret and answer, yielding low-quality data. This study assessed the quality of published survey instruments in HPE. METHOD In 2017, the authors performed an analysis of HPE research articles published in three high-impact journals in 2013. They included articles that employed at least one self-administered survey. They designed a coding rubric addressing five violations of established best practices for survey item design and used it to collect descriptive data on the validity and reliability evidence reported and to assess the quality of available survey items. RESULTS Thirty-six articles met inclusion criteria and included the instrument for coding, with one article using 2 surveys, yielding 37 unique surveys. Authors reported validity and reliability evidence for 13 (35.1%) and 8 (21.6%) surveys, respectively. Results of the item-quality assessment revealed that a substantial proportion of published survey instruments violated established best practices in the design and visual layout of Likert-type rating items. Overall, 35 (94.6%) of the 37 survey instruments analyzed contained at least one violation of best practices. CONCLUSIONS The majority of articles failed to report validity and reliability evidence, and a substantial proportion of the survey instruments violated established best practices in survey design. The authors suggest areas of future inquiry and provide several improvement recommendations for HPE researchers, reviewers, and journal editors.
Affiliation(s)
- Anthony R Artino
- A.R. Artino Jr is professor of medicine and deputy director of graduate programs in health professions education, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0003-2661-7853. A.W. Phillips is adjunct clinical professor of emergency medicine, Department of Emergency Medicine, University of North Carolina, Chapel Hill, North Carolina. A. Utrankar is a fourth-year medical student, Vanderbilt University School of Medicine, Nashville, Tennessee. A.Q. Ta is a second-year medical student, University of Illinois College of Medicine, Chicago, Illinois. S.J. Durning is professor of medicine and pathology and director of graduate programs in health professions education, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland
26
Miller B, Pellegrino JL. Measuring Intent to Aid of Lay Responders: Survey Development and Validation. HEALTH EDUCATION & BEHAVIOR 2017; 45:730-740. [PMID: 29271256 DOI: 10.1177/1090198117749257] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND Increasing lay responder cardiopulmonary resuscitation and automated external defibrillator use during sudden cardiac arrest depends on an individual's choice. Investigators designed and piloted an instrument to measure the affective domain of helping behaviors by applying the theory of planned behavior (TPB) to better understand lay responders' intent to use lifesaving skills. METHOD Questionnaire items were compiled into 10 behavioral domains informed by the TPB constructs, followed by refinement via piloting and expert review. Two samples from an American Red Cross-trained lay-responder population (N = 4,979) provided data for exploratory (EFA, n = 235) and confirmatory (CFA, n = 198) factor analyses. EFA grouped interitem relationships into factors and affective subscales. CFA yielded statistical validation of factors and subscales. RESULTS The EFA identified four factors, aligned with the TPB constructs of attitudes, norms, confidence, and intention to act, explaining 57% of interitem variance. The internal consistency of factor-derived subscales ranged between 0.71 and 0.91. Instrument items were reduced from 47 to 32 (a 32% reduction). The CFA yielded good model fit after the legal ramification item was moved from the social norm to the intention construct. CONCLUSION The Intent to Aid (I2A) survey derived from this investigation aligned with the constructs of the TPB, yielding four subscales. The I2A allows health education researchers to differentiate the impact of modalities and content on learner intention to act in a first aid (FA) emergency. The I2A complements cognitive and psychomotor measurements of learning outcomes. The experimental instrument aims to give curricula developers and program evaluators a means of assessing the affective domain of human learning regarding intention to act in an FA emergency. In combination with assessment of functional knowledge and essential skills, this instrument may provide curricula developers and health educators an avenue to better describe intention to act in an FA emergency.
Affiliation(s)
- Brian Miller
- 1 The University of Akron, Akron, OH, USA.,2 Kent State University, Kent, OH, USA
27
van der Meulen MW, Boerebach BCM, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Arah OA, Lombarts KMJMH. Validation of the INCEPT: A Multisource Feedback Tool for Capturing Different Perspectives on Physicians' Professional Performance. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2017; 37:9-18. [PMID: 28212117 DOI: 10.1097/ceh.0000000000000143] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
INTRODUCTION Multisource feedback (MSF) instruments must feasibly provide reliable and valid data on physicians' performance from multiple perspectives. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. METHODS The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. The psychometric qualities and feasibility of the INCEPT were investigated using exploratory and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α, and generalizability analyses. RESULTS For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self-)management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence came from the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident, and three coworker evaluations were needed; for subscale scores, evaluations from three peers, three residents, and three to four coworkers were sufficient. DISCUSSION The INCEPT provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
Affiliation(s)
- Mirja W van der Meulen
- Ms. van der Meulen: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Boerebach: Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Smirnova: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Heeneman: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. oude Egbrink: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. van der Vleuten: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. Arah: Professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, CA, and UCLA Center for Health Policy Research, Los Angeles, CA. Dr. Lombarts: Professor, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
28
Burk-Rafel J, Mullan PB, Wagenschutz H, Pulst-Korenberg A, Skye E, Davis MM. Scholarly Concentration Program Development: A Generalizable, Data-Driven Approach. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:S16-S23. [PMID: 27779505 DOI: 10.1097/acm.0000000000001362] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
PURPOSE Scholarly concentration programs-also known as scholarly projects, pathways, tracks, or pursuits-are increasingly common in U.S. medical schools. However, systematic, data-driven program development methods have not been described. METHOD The authors examined scholarly concentration programs at U.S. medical schools that U.S. News & World Report ranked as top 25 for research or primary care (n = 43 institutions), coding concentrations and mission statements. Subsequently, the authors conducted a targeted needs assessment via a student-led, institution-wide survey, eliciting learners' preferences for 10 "Pathways" (i.e., concentrations) and 30 "Topics" (i.e., potential content) augmenting core curricula at their institution. Exploratory factor analysis (EFA) and a capacity optimization algorithm characterized best institutional options for learner-focused Pathway development. RESULTS The authors identified scholarly concentration programs at 32 of 43 medical schools (74%), comprising 199 distinct concentrations (mean concentrations per program: 6.2, mode: 5, range: 1-16). Thematic analysis identified 10 content domains; most common were "Global/Public Health" (30 institutions; 94%) and "Clinical/Translational Research" (26 institutions; 81%). The institutional needs assessment (n = 468 medical students; response rate 60% overall, 97% among first-year students) demonstrated myriad student preferences for Pathways and Topics. EFA of Topic preferences identified eight factors, systematically related to Pathway preferences, informing content development. Capacity modeling indicated that offering six Pathways could guarantee 95% of first-year students (162/171) their first- or second-choice Pathway. CONCLUSIONS This study demonstrates a generalizable, data-driven approach to scholarly concentration program development that reflects student preferences and institutional strengths, while optimizing program diversity within capacity constraints.
Affiliation(s)
- Jesse Burk-Rafel
- J. Burk-Rafel is a fourth-year medical student, University of Michigan Medical School, Ann Arbor, Michigan. P.B. Mullan is professor of medical education, Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan. H. Wagenschutz is codirector, Paths of Excellence, and codirector for leadership, University of Michigan Medical School, Ann Arbor, Michigan. A. Pulst-Korenberg is a resident physician, Department of Emergency Medicine, University of Washington Medical Center, Seattle, Washington. E. Skye is codirector, Paths of Excellence, house director, M-Home Learning Community, and associate professor, University of Michigan Medical School, Ann Arbor, Michigan. M.M. Davis is professor of pediatrics, division head of Academic General Pediatrics, and director of the Smith Child Health Research Center, Ann and Robert H. Lurie Children's Hospital, Northwestern Feinberg School of Medicine, Chicago, Illinois
29
Rodriguez AN, DeWitt P, Fisher J, Broadfoot K, Hurt KJ. Psychometric characterization of the obstetric communication assessment tool for medical education: a pilot study. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2016; 7:168-179. [PMID: 27289202 PMCID: PMC4912696 DOI: 10.5116/ijme.5740.4262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2016] [Accepted: 05/21/2016] [Indexed: 06/06/2023]
Abstract
OBJECTIVE To characterize the psychometric properties of a novel Obstetric Communication Assessment Tool (OCAT) in a pilot study of standardized difficult OB communication scenarios appropriate for undergraduate medical evaluation. METHODS We developed and piloted four challenging OB Standardized Patient (SP) scenarios in a sample of twenty-one third year OB/GYN clerkship students: Religious Beliefs (RB), Angry Father (AF), Maternal Smoking (MS), and Intimate Partner Violence (IPV). Five trained Standardized Patient Reviewers (SPRs) independently scored twenty-four randomized video-recorded encounters using the OCAT. Cronbach's alpha and Intraclass Correlation Coefficient-2 (ICC-2) were used to estimate internal consistency (IC) and inter-rater reliability (IRR), respectively. Systematic variation in reviewer scoring was assessed using the Stuart-Maxwell test. RESULTS IC was acceptable to excellent with Cronbach's alpha values (and 95% Confidence Intervals [CI]): RB 0.91 (0.86, 0.95), AF 0.76 (0.62, 0.87), MS 0.91 (0.86, 0.95), and IPV 0.94 (0.91, 0.97). IRR was unacceptable to poor with ICC-2 values: RB 0.46 (0.40, 0.53), AF 0.48 (0.41, 0.54), MS 0.52 (0.45, 0.58), and IPV 0.67 (0.61, 0.72). Stuart-Maxwell analysis indicated systematic differences in reviewer stringency. CONCLUSIONS Our initial characterization of the OCAT demonstrates important issues in communications assessment. We identify scoring inconsistencies due to differences in SPR rigor that require enhanced training to improve assessment reliability. We outline a rational process for initial communication tool validation that may be useful in undergraduate curriculum development, and acknowledge that rigorous validation of OCAT training and implementation is needed to create a valuable OB communication assessment tool.
Affiliation(s)
- A. Noel Rodriguez
- Department of Obstetrics and Gynecology, Divisions of Maternal Fetal Medicine and Reproductive Sciences, University of Colorado School of Medicine, Anschutz Medical Campus, Aurora, Colorado, USA
- Peter DeWitt
- Biostatistics and Informatics, Research Consulting Lab, Colorado School of Public Health, Aurora, Colorado, USA
- Jennifer Fisher
- Center for Advancing Professional Excellence (CAPE), University of Colorado School of Medicine, Aurora, Colorado, USA
- Kirsten Broadfoot
- Center for Advancing Professional Excellence (CAPE), University of Colorado School of Medicine, Aurora, Colorado, USA
- K. Joseph Hurt
- Department of Obstetrics and Gynecology, Divisions of Maternal Fetal Medicine and Reproductive Sciences, University of Colorado School of Medicine, Anschutz Medical Campus, Aurora, Colorado, USA
30
Nicholls D, Sweet L, Skuza P, Muller A, Hyett J. Sonographer Skill Teaching Practices Survey: Development and initial validation of a survey instrument. Australas J Ultrasound Med 2016; 19:109-117. [DOI: 10.1002/ajum.12011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Affiliation(s)
- Delwyn Nicholls
- Faculty of Medicine, Nursing & Health Sciences; Flinders University; Adelaide South Australia Australia
- Linda Sweet
- Faculty of Medicine, Nursing & Health Sciences; Flinders University; Adelaide South Australia Australia
- Pawel Skuza
- eResearch; Flinders University; Adelaide South Australia Australia
- Amanda Muller
- Faculty of Medicine, Nursing & Health Sciences; Flinders University; Adelaide South Australia Australia
- Jon Hyett
- RPA Women and Babies; Royal Prince Alfred Hospital; Camperdown New South Wales Australia
- Discipline of Obstetrics, Gynaecology and Neonatology; Faculty of Medicine; University of Sydney; Sydney New South Wales Australia
31
Silkens MEWM, Smirnova A, Stalmeijer RE, Arah OA, Scherpbier AJJA, Van Der Vleuten CPM, Lombarts KMJMH. Revisiting the D-RECT tool: Validation of an instrument measuring residents' learning climate perceptions. MEDICAL TEACHER 2016; 38:476-481. [PMID: 26172348 DOI: 10.3109/0142159x.2015.1060300] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
INTRODUCTION Credible evaluation of the learning climate requires valid and reliable instruments in order to inform quality improvement activities. Since its initial validation the Dutch Residency Educational Climate Test (D-RECT) has been increasingly used to evaluate the learning climate, yet it has not been tested in its final form and on the actual level of use - the department. AIM Our aim was to re-investigate the internal validity and reliability of the D-RECT at the resident and department levels. METHODS D-RECT evaluations collected during 2012-2013 were included. Internal validity was assessed using exploratory and confirmatory factor analyses. Reliability was assessed using generalizability theory. RESULTS In total, 2306 evaluations and 291 departments were included. Exploratory factor analysis showed a 9-factor structure containing 35 items: teamwork, role of specialty tutor, coaching and assessment, formal education, resident peer collaboration, work is adapted to residents' competence, patient sign-out, educational atmosphere, and accessibility of supervisors. Confirmatory factor analysis indicated acceptable to good fit. Three resident evaluations were needed to assess the overall learning climate reliably and eight residents to assess the subscales. CONCLUSION This study reaffirms the reliability and internal validity of the D-RECT in measuring residency training learning climate. Ongoing evaluation of the instrument remains important.
Collapse
Affiliation(s)
- Alina Smirnova
- a University of Amsterdam , The Netherlands
- b Maastricht University , The Netherlands
- Onyebuchi A Arah
- a University of Amsterdam , The Netherlands
- c University of California Los Angeles , USA
- d UCLA Center for Health Policy Research , USA
32
Cor MK. Trust me, it is valid: Research validity in pharmacy education research. CURRENTS IN PHARMACY TEACHING & LEARNING 2016; 8:391-400. [PMID: 30070250 DOI: 10.1016/j.cptl.2016.02.014] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2015] [Accepted: 02/02/2016] [Indexed: 06/08/2023]
Abstract
Research validity is a complex concept that is often used loosely or conflated with concepts in measurement validity in the published quantitative pharmacy education literature. The problem begins with a lack of clarity about the distinction between four types of research validity: measurement, statistical conclusion, internal, and external validity (i.e., generalizability). In many cases, published studies provide only incomplete discussions of measurement and external validity. The problem is exacerbated within the context of measurement validity, where validation efforts are often reduced to statements about established levels of reliability. Ineffective discussions of research validity make it difficult to interpret study findings. After reading this article, the reader will be able to identify the different types of research validity and discuss issues of research validity in quantitative pharmacy education research more completely.
Affiliation(s)
- Mathew Kenneth Cor
- Faculty of Pharmacy and Pharmaceutical Sciences, University of Alberta, Edmonton, Alberta, Canada.
33
Myers B, Govender R, Koch JR, Manderscheid R, Johnson K, Parry CDH. Development and psychometric validation of a novel patient survey to assess perceived quality of substance abuse treatment in South Africa. Subst Abuse Treat Prev Policy 2015; 10:44. [PMID: 26545736 PMCID: PMC4636825 DOI: 10.1186/s13011-015-0040-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2015] [Accepted: 11/02/2015] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND A hybrid performance measurement system that combines patient-reported outcome data with administrative data has been developed for South African substance abuse treatment services. This paper describes the development and psychometric validation of one component of this system, the South African Addiction Treatment Services Assessment (SAATSA). METHODS First, a national steering committee identified five domains and corresponding indicators on which treatment quality should be assessed. A decision was made to develop a patient survey to assess several of these indicators. A stakeholder work group sourced survey items and generated additional items where appropriate. The feasibility and face validity of these items were examined during cognitive response testing with 16 patients. This led to the elimination of several items. Next, we conducted an initial psychometric validation of the SAATSA with 364 patients from residential and outpatient services. Exploratory (EFA) and confirmatory factor analyses (CFA) were conducted to assess the latent structure of the SAATSA. Findings highlighted areas where the SAATSA required revision. Following revision, we conducted another psychometric validation with an additional sample of 285 patients. We used EFA and CFA to assess construct validity and we assessed reliability using Cronbach's measure of internal consistency. RESULTS The final version of the SAATSA comprised 31 items (rated on a four-point response scale) that correspond to six scales. Four of these scales are patient-reported outcome measures (substance use, quality of life, social connectedness and HIV risk outcomes) that together assess the perceived effectiveness of treatment. The remaining two scales assess patients' perceptions of access to and quality of care. The models for the final revised scales had good fit and the internal reliability of these scales was good to excellent, with Cronbach's α ranging from 0.72 to 0.89. 
CONCLUSION A lack of adequate measurement tools hampers efforts to improve the quality of substance abuse treatment. Our preliminary evidence suggests that the SAATSA, a novel patient survey that assesses patients' perceptions of the outcomes and quality of substance abuse treatment, is a psychometrically robust tool that can help fill this void.
Affiliation(s)
- Bronwyn Myers
- Alcohol, Tobacco and Other Drug Research Unit, South African Medical Research Council, Cape Town, South Africa.
- Department of Psychiatry and Mental Health, University of Cape Town, Cape Town, South Africa.
- Rajen Govender
- Department of Sociology, University of Cape Town, Cape Town, South Africa.
- J Randy Koch
- Department of Psychology, Virginia Commonwealth University, Richmond, VA, USA.
- Ron Manderscheid
- National Association of County Behavioral Health and Developmental Disability Directors, Washington DC, USA.
- Kim Johnson
- Alcohol, Tobacco and Other Drug Research Unit, South African Medical Research Council, Cape Town, South Africa.
- Charles D H Parry
- Alcohol, Tobacco and Other Drug Research Unit, South African Medical Research Council, Cape Town, South Africa.
- Department of Psychiatry, Stellenbosch University, Cape Town, South Africa.
34
Spruijt A, Leppink J, Wolfhagen I, Bok H, Mainhard T, Scherpbier A, van Beukelen P, Jaarsma D. Factors Influencing Seminar Learning and Academic Achievement. JOURNAL OF VETERINARY MEDICAL EDUCATION 2015; 42:259-270. [PMID: 26075625 DOI: 10.3138/jvme.1114-119r2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Many veterinary curricula use seminars, interactive educational group formats in which some 25 students discuss questions and issues relating to course themes. To get indications on how to optimize the seminar learning process for students, we aimed to investigate relationships between factors that seem to be important for the seminar learning process, and to determine how these seminar factors account for differences in students' achievement scores. A 57-item seminar evaluation (USEME) questionnaire was administered to students right after they attended a seminar. In total, 80 seminars distributed over years 1, 2, and 3 of an undergraduate veterinary medicine curriculum were sampled and 988 questionnaires were handed in. Principal factor analysis (PFA) was conducted on 410 questionnaires to examine which items could be grouped together as indicators of the same factor, and to determine correlations between the derived factors. Multilevel regression analysis was performed to explore the effects of these seminar factors and students' prior achievement scores on students' achievement scores. Within the questionnaire, four factors were identified that influence the seminar learning process: teacher performance, seminar content, student preparation, and opportunities for interaction within seminars. Strong correlations were found between teacher performance, seminar content, and group interaction. Prior achievement scores and, to a much lesser extent, the seminar factor group interaction appeared to account for differences in students' achievement scores. The factors resulting from the present study and their relation to the method of assessment should be examined further, for example, in an experimental setup.
35
Smith NA, Castanelli DJ. Measuring the clinical learning environment in anaesthesia. Anaesth Intensive Care 2015; 43:199-203. [PMID: 25735685 DOI: 10.1177/0310057x1504300209] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The learning environment describes the way that trainees perceive the culture of their workplace. We audited the learning environment for trainees throughout Australia and New Zealand in the early stages of curriculum reform. A questionnaire was developed and sent electronically to a large random sample of Australian and New Zealand College of Anaesthetists trainees, with a 26% final response rate. This new instrument demonstrated good psychometric properties, with Cronbach's α ranging from 0.81 to 0.91 for each domain. The median score was equivalent to 78%, with the majority of trainees giving scores in the medium range. Introductory respondents scored their learning environment more highly than all other levels of respondents (P=0.001 for almost all comparisons). We present a simple questionnaire instrument that can be used to determine characteristics of the anaesthesia learning environment. The instrument can be used to help assess curricular change over time, alignment of the formal and informal curricula and strengths and weaknesses of individual departments.
Affiliation(s)
- N A Smith
- Department of Anaesthesia, The Wollongong Hospital, Wollongong and Graduate School of Medicine, University of Wollongong, Wollongong, New South Wales
- D J Castanelli
- Department of Anaesthesia and Perioperative Medicine, Monash Medical Centre, Melbourne, Victoria
36
Cooke NK, Nietfeld JL, Goodell LS. The development and validation of the childhood obesity prevention self-efficacy (COP-SE) survey. Child Obes 2015; 11:114-21. [PMID: 25585108 DOI: 10.1089/chi.2014.0103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
BACKGROUND Physicians can play an important role in preventing and treating childhood obesity. There are currently no validated measures of medical students' self-efficacy in these skills; therefore, we sought to develop a valid and reliable computerized survey to measure medical students' self-efficacy in skills needed to prevent and treat childhood obesity. METHODS We developed the Childhood Obesity Prevention Self-Efficacy (COP-SE) survey with input from two expert panels and cognitive interviews with medical students. We administered the 43-item COP-SE computerized survey to a nation-wide sample of medical students. RESULTS The final sample consisted of 444 medical students from 53 medical schools. Exploratory factor analysis revealed a two-factor structure with a correlation of 0.637 between factors and high reliability within factors. The correlation between the COP-SE and a measure of general self-efficacy was moderate (0.648), and reliability within factors was high (Factor 1=0.946; Factor 2=0.927). CONCLUSIONS The 18-item COP-SE is a valid and reliable measure of childhood obesity prevention self-efficacy. Factor 1 assesses self-efficacy in nutrition counseling, and Factor 2 measures self-efficacy to assess readiness to change and initiate nutrition lifestyle changes. The correlation between the COP-SE and a measure of general self-efficacy indicates that the COP-SE is a distinct, valid assessment of domain-specific self-efficacy. The high reliability of items within factors indicates the items measure the same constructs. Therefore, medical schools can use this valid and reliable instrument as a formative or summative assessment of students' self-efficacy in childhood obesity prevention and treatment.
Affiliation(s)
- Natalie K Cooke: Department of Food, Bioprocessing, and Nutrition Sciences, North Carolina State University, Raleigh, NC
37. Woitha K, Hasselaar J, van Beek K, Ahmed N, Jaspers B, Hendriks JCM, Radbruch L, Vissers K, Engels Y. Testing feasibility and reliability of a set of quality indicators to evaluate the organization of palliative care across Europe: a pilot study in 25 countries. Palliat Med 2015; 29:157-63. [PMID: 25634899] [DOI: 10.1177/0269216314562100]
Abstract
BACKGROUND A well-organized palliative care service is a prerequisite for offering good palliative care. Reliable and feasible quality indicators are needed to monitor the quality of its organization.
AIM To test the feasibility and reliability of a previously developed set of quality indicators in settings and services that provide palliative care across Europe.
DESIGN Cross-sectional online survey.
METHODS A total of 38 quality indicators, applicable in all types of settings, rated in a RAND Delphi process and operationalized into 38 yes/no questions, were used. Descriptive statistics, factor and reliability analyses, analysis of variance, and chi-square analyses were applied.
SETTING/PARTICIPANTS Questionnaires were sent to representatives of 217 palliative care settings in 25 countries. Included settings were hospices, inpatient dedicated palliative care beds, palliative care outpatient clinics, palliative care units, day care centers for palliative care, palliative care home support teams, inpatient palliative care support teams, care homes, and nursing homes.
RESULTS All 25 invited European Association for Palliative Care countries took part. In total, 107 out of 217 participants responded (57%). The quality indicators were reduced to four coherent subscales: "equipment and continuity of care," "structured documentation of essential palliative care elements in the medical record," "training and appraisal of personnel," and "availability of controlled drugs." No significant differences in quality criteria between the different types of settings and services were identified.
CONCLUSION The set of quality indicators appeared to measure four reliable domains that assess the organization of different palliative care settings. It can be used as a starting point for quality improvement activities.
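The chi-square analyses named in the methods above compare observed counts in a contingency table against the counts expected under independence. A minimal sketch of that computation, using made-up counts rather than the study's data:

```python
def chi_square_stat(observed):
    """Pearson chi-square statistic for a contingency table of counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: setting type (rows) vs. indicator met / not met (columns)
print(round(chi_square_stat([[10, 20], [20, 10]]), 3))  # 6.667
```

Degrees of freedom and p-values are omitted from this sketch; in practice a statistics library (for example `scipy.stats.chi2_contingency`) would report those as well.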
Affiliation(s)
- Kathrin Woitha: Department of Anesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Jeroen Hasselaar: Department of Anesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Karen van Beek: Department of Radiotherapy-Oncology and Palliative Medicine, University Hospital Leuven, Leuven, Belgium
- Nisar Ahmed: Academic Unit of Supportive Care, School of Medicine and Biomedical Sciences, University of Sheffield, Sheffield, UK
- Birgit Jaspers: Palliative Care Centre, Department of Palliative Medicine, Malteser Hospital Bonn/Rhein-Sieg, University of Bonn, Bonn, Germany; Department of Palliative Medicine, Georg-August-University of Göttingen, Göttingen, Germany
- Jan C M Hendriks: Department of Anesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Department for Health Evidence, Radboud University Medical Center, Nijmegen, The Netherlands
- Lukas Radbruch: Palliative Care Centre, Department of Palliative Medicine, Malteser Hospital Bonn/Rhein-Sieg, University of Bonn, Bonn, Germany
- Kris Vissers: Department of Anesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Yvonne Engels: Department of Anesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
38. Ziff OJ, Samra M. Sink or swim: Near-peer teaching eases the transition into hospital-based medical education. Medical Teacher 2014; 37:603. [PMID: 25301148] [DOI: 10.3109/0142159x.2014.970994]
Affiliation(s)
- Oliver J Ziff: College of Medical and Dental Sciences, University of Birmingham, Birmingham B15 2SP, UK
39.
Affiliation(s)
- Sue Roff: Centre for Medical Education, Dundee University, Kirsty Semple Way, Dundee, DD2 4BF, United Kingdom
40. Boerebach BCM, Lombarts KMJMH, Arah OA. Confirmatory Factor Analysis of the System for Evaluation of Teaching Qualities (SETQ) in Graduate Medical Training. Eval Health Prof 2014; 39:21-32. [DOI: 10.1177/0163278714552520]
Abstract
The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians’ teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians’ teaching performance.
Affiliation(s)
- Benjamin C. M. Boerebach: Professional Performance research group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Kiki M. J. M. H. Lombarts: Professional Performance research group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Onyebuchi A. Arah: Department of Epidemiology, University of California, Los Angeles (UCLA) School of Public Health, Los Angeles, CA, USA; UCLA Center for Health Policy Research, Los Angeles, CA, USA
41. Tan KT, Adzhahar FBB, Lim I, Chan M, Lim WS. Transactive memory system as a measure of collaborative practice in a geriatrics team: implications for continuing interprofessional education. J Interprof Care 2014; 28:239-45. [DOI: 10.3109/13561820.2014.901938]
42. Jochemsen-van der Leeuw HGAR, van Dijk N, Wieringa-de Waard M. Assessment of the clinical trainer as a role model: a Role Model Apperception Tool (RoMAT). Academic Medicine 2014; 89:671-7. [PMID: 24556764] [PMCID: PMC4885572] [DOI: 10.1097/acm.0000000000000169]
Abstract
PURPOSE Positive role modeling by clinical trainers is important for helping trainees learn professional and competent behavior. The authors developed and validated an instrument to assess clinical trainers as role models: the Role Model Apperception Tool (RoMAT).
METHOD On the basis of a 2011 systematic review of the literature and through consultation with medical education experts and with clinical trainers and trainees, the authors developed 17 attributes characterizing a role model, to be assessed using a Likert scale. In 2012, general practice (GP) trainees, in their first or third year of postgraduate training, who attended a curriculum day at four institutes in different parts of the Netherlands, completed the RoMAT. The authors performed a principal component analysis on the data that were generated, and they tested the instrument's validity and reliability.
RESULTS Of 328 potential GP trainees, 279 (85%) participated. Of these, 202 (72%) were female, and 154 (55%) were first-year trainees. The RoMAT demonstrated both content and convergent validity. Two components were extracted: "Caring Attitude" and "Effectiveness." Both components had high reliability scores (0.92 and 0.84, respectively). Less experienced trainees scored their trainers significantly higher on the Caring Attitude component.
CONCLUSIONS The RoMAT proved to be a valid, reliable instrument for assessing clinical trainers' role-modeling behavior. Both components include an equal number of items addressing personal (Heart), teaching (Head), and clinical (Hands-on) qualities, thus demonstrating that competence in the "3Hs" is a condition for positive role modeling. Educational managers (residency directors) and trainees alike can use the RoMAT.
Affiliation(s)
- H G A Ria Jochemsen-van der Leeuw: general practitioner and PhD student, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands
- N van Dijk: assistant professor, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands
- M Wieringa-de Waard: professor, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands
43. Wijnen-Meijer M, Van der Schaaf M, Booij E, Harendza S, Boscardin C, Van Wijngaarden J, Ten Cate TJ. An argument-based approach to the validation of UHTRUST: can we measure how recent graduates can be trusted with unfamiliar tasks? Advances in Health Sciences Education 2013; 18:1009-27. [PMID: 23400369] [DOI: 10.1007/s10459-013-9444-x]
Abstract
There is a need for valid methods to assess the readiness for clinical practice of medical graduates. This study evaluates the validity of the Utrecht Hamburg Trainee Responsibility for Unfamiliar Situations Test (UHTRUST), an authentic simulation procedure to assess whether medical trainees are ready to be entrusted with unfamiliar clinical tasks near the highest level of Miller's pyramid. This assessment, in which candidates were judged by clinicians, nurses, and standardized patients, addresses the question: can this trainee be trusted with unfamiliar clinical tasks? The aim of this paper is to provide a validity argument for this assessment procedure. We collected data from various sources during the preparation and administration of a UHTRUST assessment. In total, 60 candidates (30 from the Netherlands and 30 from Germany) participated. To provide a validity argument for the UHTRUST assessment, we followed Kane's argument-based approach to validation. All available data were used to design a coherent and plausible argument, and considerable data were collected during the development of the assessment procedure. In addition, a generalizability study was conducted to evaluate the reliability of the scores given by assessors and to determine the proportion of variance accounted for by candidates and assessors. Most of Kane's validity assumptions were found to be defensible, with accurate and often parallel lines of backing. UHTRUST can be used to compare the readiness for clinical practice of medical graduates. Further exploration of the procedures for entrustment decisions is recommended.
Affiliation(s)
- M Wijnen-Meijer: Center for Research and Development of Education, University Medical Center Utrecht, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands
44. Peeters MJ, Beltyukova SA, Martin BA. Educational testing and validity of conclusions in the scholarship of teaching and learning. American Journal of Pharmaceutical Education 2013; 77:186. [PMID: 24249848] [PMCID: PMC3831397] [DOI: 10.5688/ajpe779186]
Abstract
Validity, and the reliability evidence integral to it, are fundamental to educational and psychological measurement and to the standards of educational testing. Herein, we describe these standards of educational testing, along with reliability subtypes including internal consistency, inter-rater reliability, and inter-rater agreement. Next, the related issues of measurement error and effect size are discussed. This article concludes with a call for future authors to improve the reporting of psychometrics and practical significance in educational testing in the pharmacy education literature. By increasing the scientific rigor of educational research and reporting, the overall quality and meaningfulness of the scholarship of teaching and learning (SoTL) will be improved.
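Several entries in this list report internal consistency via Cronbach's alpha (e.g. subscale reliabilities of 0.92 and 0.84, or a 0.88-0.97 range). As an illustrative sketch of what that coefficient computes, here is a pure-Python version applied to hypothetical ratings, not to any study's data:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = len(scores[0])                 # number of items
    items = list(zip(*scores))         # one column of scores per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Four hypothetical respondents rating three items on a 5-point scale
ratings = [[2, 4, 3], [4, 5, 5], [3, 4, 4], [5, 5, 5]]
print(round(cronbach_alpha(ratings), 3))  # 0.923
```

Alpha rises when items covary strongly relative to their individual variances, which is why values near 0.9 are read as high internal consistency; this sketch uses sample variances throughout, matching Python's `statistics.variance`.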
Affiliation(s)
- Michael J. Peeters: College of Pharmacy and Pharmaceutical Sciences, University of Toledo, Toledo, Ohio
- Beth A. Martin: School of Pharmacy, University of Wisconsin-Madison, Madison, Wisconsin
45. Winning TA, Kinnell A, Wener ME, Mazurat N, Schönwetter DJ. Validity of scores from communication skills instruments for patients and their dental student-clinicians. European Journal of Dental Education 2013; 17:93-100. [PMID: 23574186] [DOI: 10.1111/eje.12015]
Abstract
The development of appropriate communication skills by healthcare providers is central to providing quality patient-centred care. Patients can provide valuable feedback to practitioners about their clinical communication. However, in oral health care their involvement is uncommon, and instruments specific to communication in oral health care have not been available. Recently, two complementary instruments were developed by the Faculty of Dentistry, University of Manitoba for evaluating student-clinicians' clinical communication: one for patient evaluation and one for student self-evaluation. The aim of the current study was to provide validity evidence, related to internal structure, for the scores of the revised 2007 versions of these instruments in two dental clinical/educational contexts, namely the Universities of Manitoba, Canada (UM) and Adelaide, Australia (UA). The proposed factor structure and loadings, and their stability across contexts, were assessed using confirmatory factor analysis, and the adequacy of the internal consistency reliability of the scores was analysed using Cronbach's alpha. The factor structure of the 2007 versions of the patient and student instruments, derived from the previously developed longer versions, was confirmed and was consistent across the two clinical/educational contexts. A model of partial invariance provided the best fit for these data, owing to variations in the magnitude of the factor loadings between sites. The internal consistency reliability of scores was high, with a range of 0.88-0.97. In conclusion, the current study provides preliminary evidence that, in terms of internal structure, the scores of the 2007 instruments measure the five factors well. Replication of the factor structure with more participants at both UA and other institutions is required.
Affiliation(s)
- T A Winning: School of Dentistry, The University of Adelaide, South Australia, Australia
46. Lockyer J. Multisource feedback: can it meet criteria for good assessment? The Journal of Continuing Education in the Health Professions 2013; 33:89-98. [PMID: 23775909] [DOI: 10.1002/chp.21171]
Abstract
INTRODUCTION High-quality instruments are required to assess and provide feedback to practicing physicians. Multisource feedback (MSF) uses questionnaires from colleagues, coworkers, and patients to provide data. It enables feedback in areas of increasing interest to the medical profession: communication, collaboration, professionalism, and interpersonal skills. The purpose of the study was to apply the 7 assessment criteria as a framework to examine the quality of MSF instruments used to assess practicing physicians.
METHODS The criteria for assessment (validity, reproducibility, equivalence, feasibility, educational effect, catalytic effect, and acceptability) were examined for 3 sets of instruments, drawing on published data.
RESULTS Three MSF instruments with a sufficient body of research for inclusion, the Canadian Physician Achievement Review instruments and the United Kingdom's GMC and CFEP360 instruments, were examined. There was evidence that MSF has been assessed against all criteria except educational effects, although variably for some of the instruments. The greatest emphasis was on validity, reproducibility, and feasibility for all of the instruments. Assessments of the catalytic effect were not available for 1 of the 2 UK instruments and were minimally examined for the other. Data about acceptability are implicit in the UK instruments, through their endorsement by the Royal College of General Practitioners, and were explicitly examined in the Canadian instruments.
DISCUSSION The 7 criteria provided a useful framework to assess the quality of MSF instruments and enable an approach to analyzing gaps in instrument assessment. These criteria are likely to be helpful in assessing other instruments used in medical education.
Affiliation(s)
- Jocelyn Lockyer: Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Canada T2N 4Z6