1
Nguyen QT, Yeh ML, Ngo LTH, Chen C. Translating and Validating the Vietnamese Version of the Health Sciences Evidence-Based Practice Questionnaire. Int J Environ Res Public Health 2023; 20:5325. PMID: 37047941; PMCID: PMC10093985; DOI: 10.3390/ijerph20075325.
Abstract
No validated instrument is available for assessing the evidence-based practice (EBP) capacity of Vietnamese health professionals. This study aimed to translate the Health Sciences Evidence-Based Practice questionnaire (HS-EBP) from English into Vietnamese and ascertain the psychometric properties of the resulting version (HS-EBP-V). Data were collected from two obstetric hospitals in Vietnam, where a total of 343 midwives were randomly selected. The questionnaire was translated into Vietnamese by a group of bilingual experts, and content validity was assessed by two experts. Internal consistency and test-retest reliability were assessed using Cronbach's α and the intraclass correlation coefficient (ICC), respectively; construct validity was assessed using the contrasted-groups approach. The content validity index of the HS-EBP-V reached 1.0. For the individual subscales, Cronbach's α ranged from 0.92 to 0.97 and the ICC from 0.45 to 0.66. The contrasted-groups approach showed discrimination: subscale scores differed significantly between diploma holders and bachelor's degree holders (p < 0.001). The HS-EBP-V thus showed satisfactory psychometric properties and was deemed a reliable, validated instrument for assessing competency in, as well as facilitators of and barriers to, the five steps of EBP among Vietnamese midwives and other healthcare professionals.
Affiliation(s)
- Quyen Thao Nguyen
- School of Nursing, National Taipei University of Nursing and Health Sciences, 365 Mingde Road, Taipei City 112, Taiwan;
- Department of Midwifery, Faculty of Nursing and Medical Technology, University of Medicine and Pharmacy at Ho Chi Minh City, 201 Nguyen Chi Thanh Street, Ho Chi Minh City 70000, Vietnam;
- Mei-Ling Yeh
- School of Nursing, National Taipei University of Nursing and Health Sciences, 365 Mingde Road, Taipei City 112, Taiwan;
- Cochrane Taiwan, Taipei Medical University, 252 Wuxing Street, Taipei City 110, Taiwan;
- Ly Thi Hai Ngo
- Department of Midwifery, Faculty of Nursing and Medical Technology, University of Medicine and Pharmacy at Ho Chi Minh City, 201 Nguyen Chi Thanh Street, Ho Chi Minh City 70000, Vietnam;
- Chiehfeng Chen
- Cochrane Taiwan, Taipei Medical University, 252 Wuxing Street, Taipei City 110, Taiwan;
- Department of Public Health, School of Medicine, College of Medicine, Taipei Medical University, 252 Wuxing Street, Taipei City 110, Taiwan
- Division of Plastic Surgery, Department of Surgery, Evidence-Based Medicine Center, Wan Fang Hospital, Taipei Medical University, No. 111, Sec. 3, Xinglong Street, Taipei City 116, Taiwan
2
Roberge-Dao J, Maggio LA, Zaccagnini M, Rochette A, Shikako-Thomas K, Boruff J, Thomas A. Quality, methods, and recommendations of systematic reviews on measures of evidence-based practice: an umbrella review. JBI Evid Synth 2022; 20:1004-1073. PMID: 35220381; DOI: 10.11124/jbies-21-00118.
Abstract
OBJECTIVES The objective of the review was to estimate the quality of systematic reviews on evidence-based practice measures across health care professions, and to identify differences between systematic reviews in the approaches used to assess the adequacy of evidence-based practice measures and in the measures they recommended. INTRODUCTION Systematic reviews on the psychometric properties of evidence-based practice measures guide researchers, clinical managers, and educators in selecting an appropriate measure for use. The lack of psychometric standards specific to evidence-based practice measures, together with recent findings suggesting the low methodological quality of psychometric systematic reviews, calls into question the quality and methods of systematic reviews examining evidence-based practice measures. INCLUSION CRITERIA We included systematic reviews that identified measures assessing evidence-based practice as a whole or its constituent parts (eg, knowledge, attitudes, skills, behaviors), and that described the psychometric evidence for any health care professional group, irrespective of assessment context (education or clinical practice). METHODS We searched five databases (MEDLINE, Embase, CINAHL, PsycINFO, and ERIC) on January 18, 2021. Two independent reviewers conducted screening, data extraction, and quality appraisal following the JBI approach. A narrative synthesis was performed. RESULTS Ten systematic reviews, published between 2006 and 2020, were included, focusing on the following groups: all health care professionals (n = 3), nurses (n = 2), occupational therapists (n = 2), physical therapists (n = 1), medical students (n = 1), and family medicine residents (n = 1). The overall quality of the systematic reviews was low: none of the reviews assessed the quality of primary studies or adhered to methodological guidelines, and only one registered a protocol. Reporting of psychometric evidence and measurement characteristics differed. While all the systematic reviews discussed internal consistency, feasibility was addressed by only three. Many approaches were used to assess the adequacy of measures, and five systematic reviews referenced appraisal tools. Criteria for the adequacy of individual properties and measures varied, but mainly followed standards for patient-reported outcome measures or the Standards for Educational and Psychological Testing. Two hundred and four unique measures were identified across the 10 reviews. One review explicitly recommended measures for occupational therapists, and four reviews identified adequate measures for all health care professionals (n = 3) and medical students (n = 1). The 27 measures deemed adequate by these five systematic reviews are described. CONCLUSIONS Our results suggest a need to improve the overall methodological quality and reporting of systematic reviews on evidence-based practice measures, to increase the trustworthiness of recommendations and allow comprehensive interpretation by end-users. Risk of bias is common to all the included systematic reviews, as the quality of primary studies was not assessed. The diversity of tools and approaches used to evaluate the adequacy of evidence-based practice measures reflects tensions regarding the conceptualization of validity, suggesting a need to reflect on the most appropriate application of validity theory to evidence-based practice measures. SYSTEMATIC REVIEW REGISTRATION NUMBER PROSPERO CRD42020160874.
Affiliation(s)
- Jacqueline Roberge-Dao
- School of Physical and Occupational Therapy, McGill University, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montréal, Montréal, QC, Canada
- Medicine and Health Professions Education, Uniformed Services University, Bethesda, MD, USA
- School of Rehabilitation, Université de Montréal, Montréal, QC, Canada
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Montréal, QC, Canada
3
Verkest V, Pingnet L, Fransen E, Declau F. Multi-dimensionality of patient-reported outcome measures in rhinoplasty satisfaction. Facial Plast Surg 2022; 38:468-476. PMID: 35114725; DOI: 10.1055/a-1760-1422.
Abstract
Background The FACE-Q rhinoplasty module (nose and nostrils scales), the Utrecht Questionnaire, and the NOSE scale are validated Dutch patient-reported outcome measures (PROMs) for evaluating rhinoplasty satisfaction. The objective of this study was to analyze the dimensionality of the variables measured by these four questionnaires and to investigate their ability to measure change. Methods A prospective single-center study was performed in a consecutive cohort of 106 Dutch-speaking patients, who were invited to complete the four PROMs preoperatively and 3 months postoperatively. Item quality was calculated for all four questionnaires. The ability of the questionnaires to differentiate between pre- and postoperative patients was determined with binary logistic regression, and exploratory factor analysis was performed to determine the latent dimensions. Results Item quality was confirmed in all questionnaires. Backward binary logistic regression revealed that the NOSE scale and the FACE-Q nose module were the best discriminators between pre- and postoperative cases; combining these two questionnaires gave a specificity of 97.33% and a sensitivity of 94.52%. Exploratory factor analysis identified four dimensions: (1) cosmesis of the nose, (2) cosmesis of the nostrils, (3) nasal function, and (4) psychosocial well-being in rhinoplasty patients. A lack of factorial invariance between the pre- and postoperative phases was detected, especially with the FACE-Q nose module and, to a lesser extent, the Utrecht Questionnaire.
Affiliation(s)
- Valérie Verkest
- Department of Otorhinolaryngology, GZA Ziekenhuizen Campus Sint-Vincentius, Antwerp, Belgium
- Laura Pingnet
- Department of Otorhinolaryngology, GZA Ziekenhuizen Campus Sint-Vincentius, Antwerp, Belgium
- NKO, University Hospital Antwerp, Edegem, Belgium
- Erik Fransen
- StatUa, Center of Statistics, University of Antwerp, Antwerp, Belgium
- Frank Declau
- Department of Otorhinolaryngology, GZA Ziekenhuizen Campus Sint-Vincentius, Antwerp, Belgium
- Department of Otorhinolaryngology, University of Antwerp Faculty of Medicine and Health Sciences, Wilrijk, Belgium
4
Imorde L, Möltner A, Runschke M, Weberschock T, Rüttermann S, Gerhardt-Szép S. Adaptation and validation of the Berlin questionnaire of competence in evidence-based dentistry for dental students: a pilot study. BMC Med Educ 2020; 20:136. PMID: 32366287; PMCID: PMC7197120; DOI: 10.1186/s12909-020-02053-0.
Abstract
BACKGROUND The purpose of this pilot study was to create a valid and reliable set of assessment questions for examining Evidence-based Dentistry (EbD) knowledge. To this end, we adapted the Berlin Questionnaire (BQ), which assesses Evidence-based Medicine (EbM) abilities, and validated it for dental students. METHODS The Berlin Questionnaire was originally validated with medical residents; we adapted it for use in a dentistry setting. An expert panel reviewed the adapted BQ for content validity. A cross-sectional cohort representing four training levels (EbD-novice dental students, EbD-trained dental students, dentists, and EbM-/EbD-expert faculty) completed the questionnaire; a total of 140 participants comprised the validation set. Internal reliability, item difficulty, and item discrimination were assessed. Construct validity was assessed by comparing the mean total scores of students with those of faculty, and by comparing the proportions of students and faculty who passed each item. RESULTS Among the 133 participants (52 EbD-novice dental students, 53 EbD-trained dental students, 12 dentists, and 16 EbM-/EbD-expert faculty), the total score differed significantly (p < 0.001) by training level. The total score reliability and the psychometric properties of items modified for discipline-specific content were acceptable; Cronbach's alpha was 0.648. CONCLUSION The adapted Berlin Questionnaire is a reliable and valid instrument for assessing competence in Evidence-based Dentistry among dental students. Future research will focus on refining the instrument further.
Affiliation(s)
- Laura Imorde
- Department of Operative Dentistry, Dental School (Carolinum), Goethe-University Frankfurt, Theodor-Stern-Kai 7/29, D-60596 Frankfurt am Main, Germany
- Andreas Möltner
- Center of Excellence for Assessment in Medicine, University of Heidelberg, Heidelberg, Germany
- Maren Runschke
- Department of Operative Dentistry, Dental School (Carolinum), Goethe-University Frankfurt, Theodor-Stern-Kai 7/29, D-60596 Frankfurt am Main, Germany
- Tobias Weberschock
- Institute of General Practice, Goethe University Frankfurt, Frankfurt am Main, Germany
- Department for Dermatology, University Hospital Goethe University Frankfurt, Frankfurt am Main, Germany
- Stefan Rüttermann
- Department of Operative Dentistry, Dental School (Carolinum), Goethe-University Frankfurt, Theodor-Stern-Kai 7/29, D-60596 Frankfurt am Main, Germany
- Susanne Gerhardt-Szép
- Department of Operative Dentistry, Dental School (Carolinum), Goethe-University Frankfurt, Theodor-Stern-Kai 7/29, D-60596 Frankfurt am Main, Germany
5
Kumaravel B, Hearn JH, Jahangiri L, Pollard R, Stocker CJ, Nunan D. A systematic review and taxonomy of tools for evaluating evidence-based medicine teaching in medical education. Syst Rev 2020; 9:91. PMID: 32331530; PMCID: PMC7183115; DOI: 10.1186/s13643-020-01311-y.
Abstract
BACKGROUND The importance of teaching the skills and practice of evidence-based medicine (EBM) to medical professionals has grown steadily in recent years. Alongside this growth is a need to evaluate the effectiveness of EBM curricula, as assessed by competency in the five 'A's': asking, acquiring, appraising, applying and assessing (impact and performance). EBM educators in medical education will benefit from a compendium of existing tools for assessing EBM competencies in their settings. The purpose of this review is to provide a systematic review and taxonomy of validated tools that evaluate EBM teaching in medical education. METHODS We searched MEDLINE, EMBASE, the Cochrane Library, the Educational Resources Information Centre (ERIC) and Best Evidence Medical Education (BEME) databases, and the references of retrieved articles, for publications between January 2005 and March 2019. We present the identified tools along with their psychometric properties, including validity, reliability and relevance to the five domains of EBM practice and the dimensions of EBM learning. We also assessed the quality of the tools, defining high-quality tools as those supported by established interrater reliability (where applicable) and objective (non-self-reported) outcome measures, and by ≥3 types of established validity evidence. We report our study in accordance with the PRISMA guidelines. RESULTS We identified 1719 potentially relevant articles, of which 63 full-text articles were assessed for eligibility against the inclusion and exclusion criteria. Twelve articles, each with a unique and newly identified tool, were included in the final analysis. All twelve tools assessed the third step of EBM practice (appraise), and four assessed only that step; none assessed the last step of EBM practice (assess). Of the seven domains of EBM learning, ten tools assessed knowledge gain, nine assessed skills, and one assessed attitude. None addressed reaction to EBM teaching, self-efficacy, behaviours or patient benefit. Of the twelve tools identified, six were of high quality. We also provide a taxonomy of the tools, using the CREATE framework, for EBM teachers in medical education. CONCLUSIONS Six tools of reasonable validity are available for evaluating most steps of EBM practice and some domains of EBM learning. Further development and validation of tools that evaluate all the steps of EBM and all educational outcome domains are needed. SYSTEMATIC REVIEW REGISTRATION PROSPERO CRD42018116203.
Affiliation(s)
- Bharathy Kumaravel
- University of Buckingham Medical School, Hunter Street, Buckingham, MK18 1EG UK
- Jasmine Heath Hearn
- Department of Psychology, Manchester Metropolitan University, Brooks Building, 53 Bonsall Street, Manchester, M15 6GX UK
- Leila Jahangiri
- Department of Life Sciences, Birmingham City University, Birmingham, B15 3TN UK
- Rachel Pollard
- Franciscan Library, University of Buckingham, Buckingham, MK18 1EG UK
- David Nunan
- Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, Oxford, OX2 6GG UK
6
Buljan I, Jerončić A, Malički M, Marušić M, Marušić A. How to choose an evidence-based medicine knowledge test for medical students? Comparison of three knowledge measures. BMC Med Educ 2018; 18:290. PMID: 30514288; PMCID: PMC6278026; DOI: 10.1186/s12909-018-1391-z.
Abstract
BACKGROUND Few studies have examined the agreement between different knowledge indices for evidence-based medicine (EBM). The aim of this study was to investigate whether the type of test used to assess EBM knowledge affects the estimate of that knowledge in medical students. METHODS Medical students enrolled in a 1-week EBM course were tested with the Fresno, Berlin, and ACE tests at the beginning and the end of the course. We evaluated the ability of these tests to detect a change in the acquired level of EBM knowledge and compared the estimates of change with those of a control group tested with the ACE and Berlin tests before and after an unrelated non-EBM course. The distributions of test scores and the average item difficulty indices were compared among the tests and between the groups. RESULTS Scores improved on all three tests relative to both the pre-test results and the control group. Students had, on average, a "good" performance on the ACE test, a "sufficient" performance on the Berlin test, and an "insufficient" or "not passed" performance on the Fresno test. The post-test improvement in performance on the Fresno test (median 31% increase in percent scores, 95% confidence interval (CI) 25-42%) exceeded those on the ACE (13%, 95% CI 13-20%) and Berlin (13%, 95% CI 7-20%) tests. The post-test score distributions showed that the ACE test had less potential than the other tests to discriminate between levels of EBM knowledge. CONCLUSION Different EBM tests yielded different assessments of general EBM knowledge in a sample of graduate medical students, with the lowest results on the Fresno test and the highest on the ACE test. In light of these findings, EBM knowledge assessment should be based on the course's content and learning objectives.
Affiliation(s)
- Ivan Buljan
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2, 21000 Split, Croatia
- Ana Jerončić
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2, 21000 Split, Croatia
- Mario Malički
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2, 21000 Split, Croatia
- Matko Marušić
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2, 21000 Split, Croatia
- Ana Marušić
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2, 21000 Split, Croatia
7
Albarqouni L, Hoffmann T, Glasziou P. Evidence-based practice educational intervention studies: a systematic review of what is taught and how it is measured. BMC Med Educ 2018; 18:177. PMID: 30068343; PMCID: PMC6090869; DOI: 10.1186/s12909-018-1284-1.
Abstract
BACKGROUND Despite the established interest in evidence-based practice (EBP) as a core competence for clinicians, evidence on how best to teach and evaluate EBP remains weak. We sought to systematically assess coverage of the five EBP steps, review the outcome domains measured, and assess the properties of the instruments used in studies evaluating EBP educational interventions. METHODS We conducted a systematic review of controlled studies (i.e. studies with a separate control group) that investigated the effect of EBP educational interventions. Using a citation analysis technique, we tracked the forward and backward citations of the index articles (i.e. the systematic reviews and primary studies included in an overview of the effect of EBP teaching) in Web of Science until May 2017. We extracted information on intervention content (grouped into the five EBP steps) and the outcome domains assessed, and searched the literature for published reliability and validity data on the EBP instruments used. RESULTS Of 1831 records identified, 302 full-text articles were screened and 85 were included. Of these, 46 (54%) were randomised trials, 51 (60%) included postgraduate-level participants, and 63 (75%) taught medical professionals. EBP Step 3 (critical appraisal) was the most frequently taught step (63 studies; 74%); only 10 (12%) of the studies taught content addressing all five EBP steps. Of the 85 studies, 52 (61%) evaluated EBP skills, 39 (46%) knowledge, 35 (41%) attitudes, 19 (22%) behaviours, 15 (18%) self-efficacy, and 7 (8%) reactions to EBP teaching delivery. Of the 24 instruments used in the included studies, 6 were high quality (achieving ≥3 types of established validity evidence); these were used in 14 (29%) of the 52 studies that measured EBP skills, 14 (41%) of the 39 studies that measured EBP knowledge, and 8 (26%) of the 35 studies that measured EBP attitudes. CONCLUSIONS Most EBP educational interventions evaluated in controlled studies focused on teaching only some of the EBP steps (predominantly critical appraisal of evidence) and did not use high-quality instruments to measure outcomes. Educational packages and instruments that address all EBP steps are needed to improve EBP teaching.
Affiliation(s)
- Loai Albarqouni
- Centre for Research in Evidence Based Practice (CREBP), Faculty of Health Science and Medicine, Bond University, Gold Coast, Australia
- Tammy Hoffmann
- Centre for Research in Evidence Based Practice (CREBP), Faculty of Health Science and Medicine, Bond University, Gold Coast, Australia
- Paul Glasziou
- Centre for Research in Evidence Based Practice (CREBP), Faculty of Health Science and Medicine, Bond University, Gold Coast, Australia
8
Kortekaas MF, Bartelink MEL, Zuithoff NPA, van der Heijden GJMG, de Wit NJ, Hoes AW. Does integrated training in evidence-based medicine (EBM) in the general practice (GP) specialty training improve EBM behaviour in daily clinical practice? A cluster randomised controlled trial. BMJ Open 2016; 6:e010537. PMID: 27625052; PMCID: PMC5030598; DOI: 10.1136/bmjopen-2015-010537.
Abstract
OBJECTIVES Evidence-based medicine (EBM) is an important element of the general practice (GP) specialty training. Studies show that integrating EBM training into clinical practice brings larger benefits than stand-alone modules; however, these studies have neither been performed in GP nor assessed the EBM behaviour of former trainees in daily clinical practice. SETTING GP specialty training in the Netherlands. PARTICIPANTS All 82 GP trainees who started their final (third) year in 2011 were approached for inclusion, of whom 79 (96%) participated: 39 in the intervention group and 40 in the control group. INTERVENTION Integrated EBM training, in which EBM is embedded closely within the clinical context through joint assignments for trainee and supervisor in daily practice, and teaching sessions based on dilemmas from actual patient consultations. COMPARISON Stand-alone EBM training at the institute only. PRIMARY AND SECONDARY OUTCOMES The primary outcome was EBM behaviour, assessed by measuring guideline adherence (incorporating rational, motivated deviation) and information-seeking behaviour; the secondary outcomes were EBM attitude and EBM knowledge. Data were acquired using logbooks and questionnaires, respectively, and analyses were performed using mixed models. RESULTS Logbook data were available from 76 (96%) of the participating trainees at baseline (7614 consultations), 60 (76%) at the end of the third year (T1, 4973 consultations) and 53 (67%) 1 year after graduation (T2, 3307 consultations). We found no significant differences in outcomes between the two groups, with relative risks for guideline adherence varying between 0.96 and 0.99 (95% CI 0.86 to 1.11) at T1 and between 0.99 and 1.10 (95% CI 0.92 to 1.25) at T2, and for information-seeking behaviour between 0.97 and 1.16 (95% CI 0.70 to 1.91) and between 0.90 and 1.10 (95% CI 0.70 to 1.32), respectively. CONCLUSIONS Integrated EBM training, compared with stand-alone EBM training, does not improve the EBM behaviour, attitude or knowledge of (future) GPs.
Affiliation(s)
- M F Kortekaas
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- M E L Bartelink
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- N P A Zuithoff
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- G J M G van der Heijden
- Department of Social Dentistry, Academic Centre for Dentistry Amsterdam, University of Amsterdam and VU University Amsterdam, Amsterdam, The Netherlands
- N J de Wit
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- A W Hoes
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands