1
Matos Sousa R, Collares CF, Pereira VH. Longitudinal variation of correlations between different components of assessment within a medical school. BMC Medical Education 2024; 24:850. PMID: 39112948; PMCID: PMC11308138; DOI: 10.1186/s12909-024-05822-3.
Abstract
BACKGROUND An assessment program should be inclusive and ensure that the various components of medical knowledge, clinical skills, and professionalism are assessed. The level of, and variation over time in, the strength of the correlation between these assessment components are still a matter of study. Based on meaningful learning theory and integrated learning theory, we hypothesized that the connections between these components strengthen during the medical school course.
METHODS This retrospective cohort study analyzed data collected over a 10-year period at one medical school. We included students in the 3rd to 6th years of medical school from 2011 to 2021. Three assessment components were addressed: Knowledge, Clinical Skills, and Professionalism. For data analysis, Pearson correlation coefficients (R) and R² were calculated to study the correlations between variables, and a z-test based on Fisher's r-to-z transformation was used to determine the differences between correlation coefficients.
RESULTS 949 medical students were included in the study. The correlation between Clinical Skills and Professionalism showed a medium to strong association (Pearson's R ranging from 0.485 to 0.734), while the correlation between Knowledge and Professionalism was weaker but evolved steadily, with Pearson's R fluctuating between 0.075 and 0.218. The correlation between Knowledge and Clinical Skills became statistically significant from 2013 onwards, peaking at a Pearson's R of 0.440 for the cohort spanning 2016-2019. We also found a strengthening of the correlation between Professionalism and Clinical Skills from the beginning to the end of clinical training, but not of their correlations with the Knowledge component.
CONCLUSIONS This analysis contributes to our understanding of the dynamics of correlations between different assessment components within an institution and provides a framework for how they interact and influence each other.
TRIAL REGISTRATION This study was not a clinical trial but a retrospective observational study without health care interventions. Nevertheless, we provide herein the study's number as submitted to the Ethics Committee: CEICVS 146/2021.
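The z-test on Fisher's r-to-z described in the methods is simple to reproduce. A minimal Python sketch of the two-sample test on independent Pearson correlations (the cohort sizes below are illustrative assumptions, not figures from the paper):

```python
# Two-tailed z-test for the difference between two independent Pearson
# correlations via Fisher's r-to-z transformation. Sample sizes here
# are illustrative placeholders, not values from the study.
import math
from scipy.stats import norm

def fisher_z_test(r1: float, n1: int, r2: float, n2: int) -> tuple[float, float]:
    z1, z2 = math.atanh(r1), math.atanh(r2)      # r-to-z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * norm.sf(abs(z))                      # two-tailed p-value
    return z, p

# e.g., the extremes of the Clinical Skills-Professionalism range
z, p = fisher_z_test(0.485, 200, 0.734, 200)
print(f"z = {z:.2f}, p = {p:.2g}")
```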
Affiliation(s)
- Rita Matos Sousa
- School of Medicine, University of Minho, Braga, 4710-057, Portugal.
- Carlos Fernando Collares
- School of Medicine, University of Minho, Braga, 4710-057, Portugal
- European Board of Medical Assessors, Cardiff, UK
- Inspirali Educação, São Paulo, Brazil
- Medical Education Unit, Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
- Faculdades Pequeno Príncipe, Curitiba, Brazil
2
Cianciolo AT, LaVoie N, Parker J. Machine Scoring of Medical Students' Written Clinical Reasoning: Initial Validity Evidence. Academic Medicine 2021; 96:1026-1035. PMID: 33637657; PMCID: PMC8243833; DOI: 10.1097/acm.0000000000004010.
Abstract
PURPOSE Developing medical students' clinical reasoning requires a structured longitudinal curriculum with frequent targeted assessment and feedback. Performance-based assessments, which have the strongest validity evidence, are currently not feasible for this purpose because they are time-intensive to score. This study explored the potential of using machine learning technologies to score one such assessment: the diagnostic justification essay.
METHOD From May to September 2018, machine scoring algorithms were trained to score a sample of 700 diagnostic justification essays written by 414 third-year medical students from the Southern Illinois University School of Medicine classes of 2012-2017. The algorithms applied semantically based natural language processing metrics (e.g., coherence, readability) to assess essay quality on 4 criteria (differential diagnosis, recognition and use of findings, workup, and thought process); the scores for these criteria were summed to create overall scores. Three sources of validity evidence (response process, internal structure, and association with other variables) were examined.
RESULTS Machine scores correlated more strongly with faculty ratings than faculty ratings did with each other (machine: .28-.53; faculty: .13-.33) and were less case-specific. Machine scores and faculty ratings were similarly correlated with medical knowledge, clinical cognition, and prior diagnostic justification. Machine scores were more strongly associated with clinical communication than were faculty ratings (.43 vs .31).
CONCLUSIONS Machine learning technologies may be useful for assessing medical students' long-form written clinical reasoning. Semantically based machine scoring may capture the communicative aspects of clinical reasoning better than faculty ratings, offering the potential for automated assessment that generalizes to the workplace. These results underscore the potential of machine scoring to capture an aspect of clinical reasoning performance that is difficult to assess with traditional analytic scoring methods. Additional research should investigate the generalizability of machine scoring and examine its acceptability to trainees and educators.
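The authors' scoring algorithms are not reproduced here, but the general shape of feature-based essay scoring is easy to illustrate. A minimal sketch, with TF-IDF features and ridge regression standing in for the semantically based metrics the paper actually used; the essays and scores are made-up placeholders:

```python
# Illustrative only: regress faculty rubric scores on text features,
# then score unseen essays. TF-IDF + ridge regression are stand-ins
# for the paper's semantic NLP metrics, not the authors' pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = [
    "Productive cough and focal crackles support pneumonia over bronchitis.",
    "Pleuritic pain with clear lungs makes pulmonary embolism more likely.",
    "Fever alone, without supporting findings or a stated workup.",
    "Hypoxia and unilateral leg swelling justify CT angiography for embolism.",
]
faculty_scores = [4.0, 4.5, 1.5, 4.5]  # summed rubric scores (placeholders)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, faculty_scores)
print(model.predict(["Cough and fever; pneumonia is the leading diagnosis."]))
```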
Affiliation(s)
- Anna T Cianciolo
- A.T. Cianciolo is associate professor of medical education, Southern Illinois University School of Medicine, Springfield, Illinois; ORCID: https://orcid.org/0000-0001-5948-9304
- Noelle LaVoie
- N. LaVoie is president, Parallel Consulting, Petaluma, California; ORCID: https://orcid.org/0000-0002-7013-3568
- James Parker
- J. Parker is senior research associate, Parallel Consulting, Petaluma, California
3
Ali SK, Baig LA, Violato C, Zahid O. Identifying a parsimonious model for predicting academic achievement in undergraduate medical education: A confirmatory factor analysis. Pak J Med Sci 2017; 33:903-908. PMID: 29067063; PMCID: PMC5648962; DOI: 10.12669/pjms.334.12610.
Abstract
OBJECTIVES This study was conducted to adduce validity evidence for admission tests and processes and to identify a parsimonious model that predicts students' academic achievement in medical college.
METHODS A psychometric study of admission data and assessment scores for five years of medical studies at Aga Khan University Medical College, Pakistan, using confirmatory factor analysis (CFA) and structural equation modeling (SEM). The sample included 276 medical students admitted in 2003, 2004, and 2005.
RESULTS The SEM, using maximum likelihood (ML) estimation (n=112), supported the existence of covariance between verbal reasoning, science, and clinical knowledge in predicting achievement in medical school. Fit indices: χ²(21) = 59.70, p < .0001; CFI = .873; RMSEA = 0.129; SRMR = 0.093.
CONCLUSIONS In addition to biology and chemistry, which have traditionally been the major criteria for admission to medical colleges in Pakistan, mathematics proved to be a better predictor of higher achievement in medical college.
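The reported fit is internally consistent: RMSEA can be recomputed from the chi-square statistic, its degrees of freedom, and the analysis sample size via the standard formula RMSEA = sqrt(max(0, (χ²/df − 1)/(N − 1))). A quick check:

```python
# Recompute RMSEA from the reported chi-square fit statistics.
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    return math.sqrt(max(0.0, (chi2 / df - 1) / (n - 1)))

print(round(rmsea(59.70, 21, 112), 3))  # -> 0.129, matching the abstract
```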
Affiliation(s)
- Syeda Kauser Ali
- Syeda Kauser Ali, Associate Professor, Department for Educational Development, Aga Khan University, PO Box 3500, Stadium Road, Karachi-74800, Pakistan
- Lubna Ansari Baig
- Lubna Baig, Pro-VC and Dean, APPNA Institute of Public Health, Jinnah Sindh Medical University, Rafiqui Shaheed Road, Karachi, Pakistan
- Claudio Violato
- Claudio Violato, PhD, Professor, Medical Education, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157, USA
- Onaiza Zahid
- Onaiza Zahid, Research Assistant, APPNA Institute of Public Health, Jinnah Sindh Medical University, Rafiqui Shaheed Road, Karachi, Pakistan
4
Park YS, Hyderi A, Bordage G, Xing K, Yudkowsky R. Inter-rater reliability and generalizability of patient note scores using a scoring rubric based on the USMLE Step-2 CS format. Advances in Health Sciences Education 2016; 21:761-773. PMID: 26757931; DOI: 10.1007/s10459-015-9664-3.
Abstract
Recent changes to the patient note (PN) format of the United States Medical Licensing Examination have challenged medical schools to improve the instruction and assessment of students taking the Step-2 clinical skills examination. The purpose of this study was to gather validity evidence regarding response process and internal structure, focusing on inter-rater reliability and generalizability, to determine whether a locally developed PN scoring rubric and scoring guidelines could yield reproducible PN scores. A randomly selected subsample of historical data (post-encounter PNs from 55 of 177 medical students) was rescored by six trained faculty raters in November-December 2014. Inter-rater reliability (% exact agreement and kappa) was calculated for five standardized patient cases administered in a local graduation competency examination. Generalizability studies were conducted to examine the overall reliability. Qualitative data were collected through surveys and a rater-debriefing meeting. The overall inter-rater reliability (weighted kappa) was .79 (Documentation = .63, Differential Diagnosis = .90, Justification = .48, and Workup = .54). The majority of score variance was due to case specificity (13%) and case-task specificity (31%), indicating differences in student performance by case and by case-task interaction. Variance associated with raters and their interactions was modest (<5%). Raters felt that justification was the most difficult task to score and that having case- and level-specific scoring guidelines during training was most helpful for calibration. The overall inter-rater reliability indicates a high level of confidence in the consistency of note scores. Designs for scoring notes may optimize reliability by balancing the number of raters and cases.
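Weighted kappa for two raters' ordinal note scores is straightforward to compute. A minimal sketch using scikit-learn; the ratings are invented and quadratic weights are an assumption, since the abstract does not state the weighting scheme used:

```python
# Weighted Cohen's kappa between two raters' ordinal note scores.
# Ratings are invented; quadratic weighting is assumed.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 4, 4, 1, 3, 2, 4]
rater_b = [3, 3, 4, 3, 1, 3, 2, 4]
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```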
Affiliation(s)
- Yoon Soo Park
- Department of Medical Education (MC 591), College of Medicine, University of Illinois at Chicago, 808 South Wood Street, 963 CMET, Chicago, IL, 60612-7309, USA.
- Abbas Hyderi
- Department of Family Medicine (MC 785), College of Medicine, University of Illinois at Chicago, 1819 West Polk Street, 150 CMW, Chicago, IL, 60612-7309, USA
- Georges Bordage
- Department of Medical Education (MC 591), College of Medicine, University of Illinois at Chicago, 808 South Wood Street, 963 CMET, Chicago, IL, 60612-7309, USA
- Kuan Xing
- Department of Medical Education (MC 591), College of Medicine, University of Illinois at Chicago, 808 South Wood Street, 963 CMET, Chicago, IL, 60612-7309, USA
- Rachel Yudkowsky
- Department of Medical Education (MC 591), College of Medicine, University of Illinois at Chicago, 808 South Wood Street, 963 CMET, Chicago, IL, 60612-7309, USA
5
Lisk K, Agur AMR, Woods NN. Exploring cognitive integration of basic science and its effect on diagnostic reasoning in novices. Perspectives on Medical Education 2016; 5:147-153. PMID: 27246965; PMCID: PMC4908035; DOI: 10.1007/s40037-016-0268-2.
Abstract
Integration of basic and clinical science knowledge is increasingly being recognized as important for practice in the health professions. The concept of 'cognitive integration' places emphasis on the value of basic science in providing critical connections to clinical signs and symptoms while accounting for the fact that clinicians may not spontaneously articulate their use of basic science knowledge in clinical reasoning. In this study we used a diagnostic justification test to explore the impact of integrated basic science instruction on novices' diagnostic reasoning process. Participants were allocated to an integrated basic science or clinical science training group. The integrated basic science group was taught the clinical features along with the underlying causal mechanisms of four musculoskeletal pathologies while the clinical science group was taught only the clinical features. Participants completed a diagnostic accuracy test immediately after initial learning, and one week later a diagnostic accuracy and justification test. The results showed that novices who learned the integrated causal mechanisms had superior diagnostic accuracy and better understanding of the relative importance of key clinical features. These findings further our understanding of cognitive integration by providing evidence of the specific changes in clinical reasoning when basic and clinical sciences are integrated during learning.
Affiliation(s)
- Kristina Lisk
- Rehabilitation Sciences Institute, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
- The Wilson Centre for Health Professions Education, Toronto General Hospital, Toronto, Ontario, Canada.
- Anne M R Agur
- Rehabilitation Sciences Institute, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Department of Surgery, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Nicole N Woods
- Rehabilitation Sciences Institute, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Department of Family and Community Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- The Wilson Centre for Health Professions Education, Toronto General Hospital, Toronto, Ontario, Canada
6
Baker EA, Ledford CH, Fogg L, Way DP, Park YS. The IDEA Assessment Tool: Assessing the Reporting, Diagnostic Reasoning, and Decision-Making Skills Demonstrated in Medical Students' Hospital Admission Notes. Teaching and Learning in Medicine 2015; 27:163-173. PMID: 25893938; DOI: 10.1080/10401334.2015.1011654.
Abstract
CONSTRUCT Clinical skills used in the care of patients, including reporting, diagnostic reasoning, and decision-making skills. Written comprehensive new patient admission notes (H&Ps) are a ubiquitous part of student education but are underutilized in the assessment of clinical skills. The interpretive summary, differential diagnosis, explanation of reasoning, and alternatives (IDEA) assessment tool was developed to assess students' clinical skills using written comprehensive new patient admission notes.
BACKGROUND The validity evidence for assessment of clinical skills using clinical documentation following authentic patient encounters has not been well documented. Diagnostic justification tools and postencounter notes are described in the literature (1,2) but are based on standardized patient encounters. To our knowledge, the IDEA assessment tool is the first published tool that uses medical students' H&Ps to rate students' clinical skills.
APPROACH The IDEA assessment tool is a 15-item instrument that asks evaluators to rate students' reporting, diagnostic reasoning, and decision-making skills based on medical students' new patient admission notes. This study presents validity evidence in support of the IDEA assessment tool using Messick's unified framework, including content (theoretical framework), response process (interrater reliability), internal structure (factor analysis and internal-consistency reliability), and relationship to other variables.
RESULTS Validity evidence is based on results from four studies conducted between 2010 and 2013. First, the factor analysis (2010, n = 216) yielded a three-factor solution, measuring patient story, IDEA, and completeness, with reliabilities of .79, .88, and .79, respectively. Second, an initial interrater reliability study (2010) involving two raters demonstrated fair to moderate consensus (κ = .21-.56, ρ = .42-.79). Third, a second interrater reliability study (2011) with 22 trained raters also demonstrated fair to moderate agreement (intraclass correlations [ICCs] = .29-.67). There was moderate reliability for all three skill domains, including reporting skills (ICC = .53), diagnostic reasoning skills (ICC = .64), and decision-making skills (ICC = .63). Fourth, there was a significant correlation between IDEA rating scores (2010-2013) and final Internal Medicine clerkship grades (r = .24), 95% confidence interval (CI) [.15, .33].
CONCLUSIONS The IDEA assessment tool is a novel tool with validity evidence to support its use in the assessment of students' reporting, diagnostic reasoning, and decision-making skills. The moderate reliability achieved supports formative or lower-stakes summative uses rather than high-stakes summative judgments.
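For readers wanting to reproduce ICCs of the kind reported above, here is a minimal sketch of a one-way random-effects ICC(1,1) computed from a ratings matrix; the data and the choice of ICC variant are illustrative assumptions, as the abstract does not specify which form was used:

```python
# One-way random-effects ICC(1,1) from a ratings matrix
# (rows = notes being rated, columns = raters). Data are invented.
import numpy as np

def icc_oneway(x: np.ndarray) -> float:
    n, k = x.shape
    row_means = x.mean(axis=1)
    msb = k * ((row_means - x.mean()) ** 2).sum() / (n - 1)      # between notes
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within notes
    return (msb - msw) / (msb + (k - 1) * msw)

ratings = np.array([[4, 4, 3], [2, 3, 2], [5, 4, 5], [1, 2, 1]])
print(round(icc_oneway(ratings), 2))
```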
Affiliation(s)
- Elizabeth A Baker
- Department of Internal Medicine, Rush University, Chicago, Illinois, USA
7
Williams RG, Klamen DL, Markwell SJ, Cianciolo AT, Colliver JA, Verhulst SJ. Variations in senior medical student diagnostic justification ability. Academic Medicine 2014; 89:790-798. PMID: 24667511; DOI: 10.1097/acm.0000000000000215.
Abstract
PURPOSE To determine the diagnostic justification proficiency of senior medical students across a broad spectrum of cases with common chief complaints and diagnoses.
METHOD The authors gathered diagnostic justification exercise data from the Senior Clinical Comprehensive Examination taken by Southern Illinois University School of Medicine's students from the classes of 2011 (n = 67), 2012 (n = 66), and 2013 (n = 79). After interviewing and examining standardized patients, students listed their key findings and the diagnostic possibilities considered, and provided a written explanation of how they used key findings to move from their initial differential diagnoses to their final diagnosis. Two physician judges blindly rated responses.
RESULTS Student diagnostic justification performance was highly variable from case to case and often rated below expectations. Of the students in the classes of 2011, 2012, and 2013, 57% (38/67), 23% (15/66), and 33% (26/79), respectively, were judged borderline or poor on diagnostic justification performance for more than 50% of the cases on the examination.
CONCLUSIONS Student diagnostic justification performance was inconsistent across the range of cases, common chief complaints, and underlying diagnoses used in this study. More than 20% of students exhibited borderline or poor diagnostic justification performance on more than 50% of the cases. If these results are confirmed at other medical schools, attention needs to be directed to investigating new curricular methods that ensure deliberate practice of these competencies across the spectrum of common chief complaints and diagnoses and do not depend on the available mix of patients.
Affiliation(s)
- Reed G Williams
- Dr. Williams is adjunct professor, Department of Surgery, Indiana University School of Medicine, Indianapolis, Indiana. He is emeritus professor of medical education and J. Roland Folse Professor of Surgical Education Research and Development Emeritus, Southern Illinois University School of Medicine, Springfield, Illinois.
- Dr. Klamen is associate dean for education and curriculum and professor and chair, Department of Medical Education, Southern Illinois University School of Medicine, Springfield, Illinois.
- Mr. Markwell is director of statistics and research consulting, Department of Surgery, Southern Illinois University School of Medicine, Springfield, Illinois.
- Dr. Cianciolo is assistant professor of medical education, Southern Illinois University School of Medicine, Springfield, Illinois.
- Dr. Colliver is professor of medical education emeritus and past director of statistics and research consulting, Southern Illinois University School of Medicine, Springfield, Illinois.
- Dr. Verhulst is professor and director of statistics and research informatics, Center for Clinical Research, Southern Illinois University School of Medicine, Springfield, Illinois.
8
Park YS, Lineberry M, Hyderi A, Bordage G, Riddle J, Yudkowsky R. Validity evidence for a patient note scoring rubric based on the new patient note format of the United States Medical Licensing Examination. Academic Medicine 2013; 88:1552-1557. PMID: 23969362; DOI: 10.1097/acm.0b013e3182a34b1e.
Abstract
PURPOSE This study examines validity evidence for the Patient Note Scoring Rubric, which was developed for a local graduation competency exam (GCE) to assess patient notes written in the new United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills format. The rubric was designed to measure three dimensions: Documentation, justified Differential Diagnosis (DDX), and Workup.
METHOD Analyses used GCE data from 170 fourth-year medical students who completed five standardized patient (SP) cases in May 2012. Five physician raters each scored all responses for one case. Internal structure was examined using correlations between dimensions and between cases; a generalizability study was also conducted. Relationship to other variables was examined by correlating patient note scores with SP encounter scores. Consequences were assessed by comparing pass-fail rates between the rubric and the previous global rating. Response process was examined using rater feedback.
RESULTS Correlations between scores from different dimensions ranged between 0.33 and 0.44. Reliability of scores based on the phi coefficient was 0.43; 15 cases were required to reach a phi coefficient of 0.70. Evidence of case specificity was found. Documentation scores were moderately correlated with SP scores for data gathering (r = 0.47, P < .001). There was no meaningful change in pass-fail rates. Raters' feedback indicated that they required more training for scoring the DDX and Workup dimensions.
CONCLUSIONS There is initial validity evidence for the use of this rubric to score local clinical exams based on the new USMLE patient note format.
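The step from an overall phi of 0.43 with five cases to roughly 0.70 with fifteen follows the standard decision-study projection phi_n = sigma^2_p / (sigma^2_p + sigma^2_abs / n), in which absolute error variance shrinks in proportion to the number of cases. A sketch of that projection:

```python
# Decision-study projection: given phi on n0 cases, project phi to n
# cases, assuming absolute error variance scales as 1/n.
def project_phi(phi0: float, n0: int, n: int) -> float:
    error_ratio = n0 * (1 / phi0 - 1)   # sigma^2_abs / sigma^2_p
    return 1 / (1 + error_ratio / n)

for n in (5, 10, 15, 20):
    print(n, round(project_phi(0.43, 5, n), 2))
# phi approaches 0.70 near n = 15, consistent with the reported D-study
```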
Affiliation(s)
- Yoon Soo Park
- Dr. Park is assistant professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois.
- Dr. Lineberry is assistant professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois.
- Dr. Hyderi is associate dean for curriculum and associate professor, Department of Family Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois.
- Dr. Bordage is professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois.
- Dr. Riddle is director of faculty development and research assistant professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois.
- Dr. Yudkowsky is director, Graham Clinical Performance Center, and associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois.
9
Riley SC, Morton J, Ray DC, Swann DG, Davidson DJ. An integrated model for developing research skills in an undergraduate medical curriculum: appraisal of an approach using student selected components. Perspectives on Medical Education 2013; 2:230-247. PMID: 24037741; PMCID: PMC3792228; DOI: 10.1007/s40037-013-0079-7.
Abstract
Student selected components (SSCs), at that time termed special study modules, were arguably the most innovative element of Tomorrow's Doctors (1993), the General Medical Council document that initiated the modernization of medical curricula in the UK. SSCs were proposed to make up one-third of the medical curriculum and to provide students with choice, whilst allowing individual schools autonomy in how SSCs were utilized. In response, the undergraduate medical curriculum at the University of Edinburgh provides integrated and sequential development and assessment of research skill learning outcomes for all students in the SSC programme. The curriculum contains SSCs which provide choice to students in all five years. There are four substantial timetabled SSCs in which students develop research skills in a topic and speciality of their choice. These SSCs are fully integrated and mapped with core learning outcomes and assessment, particularly with the 'Evidence-Based Medicine and Research' programme theme. These research skills are developed incrementally and applied fully in a research project in the fourth year. One-third of students also complete an optional intercalated one-year honours programme between years 2 and 3, usually across a wide range of honours schools at the biomedical science interface. Student feedback is insightful and demonstrates perceived attainment of research competencies. The establishment of these competencies is discussed in the context of enabling junior graduate doctors to be effective and confident in utilizing their research skills to practise evidence-based medicine. This includes examining their own practice through clinical audit, developing insight into the complexity and uncertainty of the evidence base, and gaining a view of a career as a clinical academic.
Affiliation(s)
- Simon C Riley
- Centre for Medical Education, Chancellor's Building, University of Edinburgh, Edinburgh, UK.
- MRC Centre for Reproductive Health, Queen's Medical Research Institute, 47 Little France Crescent, Edinburgh, EH16 4TJ, Scotland, UK.
- Jeremy Morton
- Anaesthesia and Critical Care, University of Edinburgh, Edinburgh, UK
- David C Ray
- Anaesthesia and Critical Care, University of Edinburgh, Edinburgh, UK
- David G Swann
- Anaesthesia and Critical Care, University of Edinburgh, Edinburgh, UK
- Donald J Davidson
- MRC Centre for Inflammation Research, University of Edinburgh, Edinburgh, UK
- MRC Centre for Inflammation Research, University of Edinburgh, Edinburgh, UK
| |
Collapse
|