1
Marcilly R, Zheng WY, Quindroit P, Pelayo S, Berdot S, Charpiat B, Corny J, Drouot S, Frery P, Leguelinel-Blache G, Mondet L, Potier A, Robert L, Ferret L, Baysari M. Comparison of the validity, perceived usefulness, and usability of I-MeDeSA and TEMAS, two tools to evaluate alert system usability. Int J Med Inform 2023;175:105091. PMID: 37182411. DOI: 10.1016/j.ijmedinf.2023.105091.
Abstract
OBJECTIVE Two tools are currently available in the literature to evaluate the usability of medication alert systems: the Instrument for Evaluating Human Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA) and the Tool for Evaluating Medication Alerting Systems (TEMAS). This study aimed to compare their convergent validity, perceived usability, usefulness, strengths and weaknesses, and users' preferences. METHOD To evaluate convergent validity, two experts mapped TEMAS items against I-MeDeSA items with respect to the usability dimensions they target. To assess the perceived usability, usefulness, strengths, and weaknesses of both tools, staff with expertise in their hospital's medication alerting system were asked to use French versions of TEMAS and I-MeDeSA. After using each tool, participants completed the System Usability Scale (SUS) and answered questions about the tool's understandability and usefulness. Finally, participants were asked to name their preferred tool. Numeric scores were compared statistically; free-text responses were analyzed using an inductive approach. RESULTS Forty-five participants from 10 hospitals took part in the study. In terms of convergent validity, I-MeDeSA focuses more on the usability of the graphical user interface, while TEMAS covers a wider range of usability principles. Both tools showed a fair level of perceived usability (SUS score: I-MeDeSA 61.85, TEMAS 62.87), but the results indicate that both tools need revision to improve their usability. Participants found TEMAS more useful than I-MeDeSA (t = -3.63, p = .005) and clearly preferred TEMAS for identifying problems in formative evaluation (39 of 45; 86.7%, p < .001) and for comparing the usability of alert systems during procurement (36 of 45; 80%, p < .001). CONCLUSIONS TEMAS is perceived as more useful and is preferred by participants. I-MeDeSA seems more relevant for quick evaluations focused on the graphical user interface, whereas TEMAS seems more suitable for in-depth usability evaluations of alert systems. Although both tools are perceived to be equally usable, they suffer from wording, instructional, and organizational problems that hinder their use. The results of this study will be used to improve the design of both tools.
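The preference proportions reported above (39 of 45 and 36 of 45) are the kind of result typically checked with an exact binomial test against a 50/50 null. The abstract does not state which test was used, so the following is only an illustrative sketch with the paper's counts plugged in:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count k (the convention
    used by common statistics packages)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Small relative tolerance guards against floating-point ties
    return sum(q for q in pmf if q <= observed * (1 + 1e-9))

# Counts reported in the abstract: 39/45 and 36/45 preferred TEMAS
p1 = binom_two_sided_p(39, 45)
p2 = binom_two_sided_p(36, 45)
print(f"{p1:.2e}, {p2:.2e}")  # both well below .001
```

Both counts sit far in the tail of a Binomial(45, 0.5) distribution, consistent with the reported p < .001.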
Affiliation(s)
- Romaric Marcilly
- Univ. Lille, CHU Lille, ULR 2694 - METRICS: Évaluation des technologies de santé et des pratiques médicales, F-59000 Lille, France; Inserm, CIC-IT 1403, F-59000 Lille, France.
- Wu-Yi Zheng
- Black Dog Institute, Randwick, NSW, Australia.
- Paul Quindroit
- Univ. Lille, CHU Lille, ULR 2694 - METRICS: Évaluation des technologies de santé et des pratiques médicales, F-59000 Lille, France.
- Sylvia Pelayo
- Univ. Lille, CHU Lille, ULR 2694 - METRICS: Évaluation des technologies de santé et des pratiques médicales, F-59000 Lille, France; Inserm, CIC-IT 1403, F-59000 Lille, France.
- Sarah Berdot
- Assistance Publique - Hôpitaux de Paris, Département de Pharmacie, Hôpital Européen Georges-Pompidou, Paris, France; Inserm, Cordeliers Research Centre, Université de Paris, Sorbonne Université, Paris, France; HeKA, Inria, Paris, France.
- Bruno Charpiat
- Pharmacie, Hôpital de la Croix-Rousse, Hospices Civils de Lyon, 69004 Lyon, France.
- Jennifer Corny
- Service de Pharmacie, Groupe Hospitalier Paris Saint-Joseph, Paris, France.
- Sylvain Drouot
- Clinical Pharmacy Department, Hôpital Bicêtre, APHP, Paris, France.
- Géraldine Leguelinel-Blache
- Desbrest Institute of Epidemiology and Public Health, Univ Montpellier, Inserm, Montpellier, France; Department of Pharmacy, CHU Nîmes, Univ Montpellier, Nîmes, France.
- Lisa Mondet
- Department of Pharmacy, CHU Amiens-Picardie, Amiens, France.
- Arnaud Potier
- Service de pharmacie, CH de Luneville, 54300 Luneville, France; Service de pharmacie, CHRU de Nancy, 54000 Nancy, France.
- Laurine Robert
- Univ. Lille, CHU Lille, ULR 2694 - METRICS: Évaluation des technologies de santé et des pratiques médicales, F-59000 Lille, France; CHU Lille, Institut de Pharmacie, Lille, France.
- Laurie Ferret
- Department of Pharmacy, General Hospital of Valenciennes, 59300, France.
- Melissa Baysari
- The University of Sydney, Faculty of Medicine and Health, School of Medical Sciences, Biomedical Informatics and Digital Health, Sydney, Australia.
2
Coiera E, Tong HL. Replication studies in the clinical decision support literature-frequency, fidelity, and impact. J Am Med Inform Assoc 2021;28:1815-1825. PMID: 34226931. PMCID: PMC8363796. DOI: 10.1093/jamia/ocab049.
Abstract
OBJECTIVE To assess the frequency, fidelity, and impact of replication studies in the clinical decision support system (CDSS) literature. MATERIALS AND METHODS A PRISMA-compliant review identified CDSS replications across 28 health and biomedical informatics journals. Included articles were assessed for fidelity to the original study using 5 categories (Identical, Substitutable, In-class, Augmented, and Out-of-class) and 7 IMPISCO domains: Investigators (I), Method (M), Population (P), Intervention (I), Setting (S), Comparator (C), and Outcome (O). A fidelity score and heat map were generated from the ratings. RESULTS Of 4063 publications matching the search criteria for CDSS research, only 12 (0.3%) were ultimately identified as replications. Six articles replicated but could not reproduce the results of the Han et al (2005) CPOE study showing a mortality increase and, over time, shifted from truth testing to generalizing this result. Other replications successfully tested variants of CDSS technology (2/12) or validated measurement instruments (4/12). DISCUSSION A replication rate of 3 in 1000 studies is low even compared with the already low rates reported in other disciplines. Several new reporting methods were developed for this study, including the IMPISCO framework, fidelity scores, and fidelity heat maps. A reporting structure for clearly identifying replication research is also proposed. CONCLUSION There is an urgent need to better characterize which core CDSS principles require replication, identify past replication data, and conduct missing replication studies. Attention to replication should improve the efficiency and effectiveness of CDSS research and avoid potentially harmful trial-and-error technology deployment.
Affiliation(s)
- Enrico Coiera
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Huong Ly Tong
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
3
Marcilly R, Zheng WY, Beuscart R, Baysari MT. Comparison of the validity, perceived usefulness and usability of I-MeDeSA and TEMAS, two tools to evaluate alert system usability: a study protocol. BMJ Open 2021;11:e050448. PMID: 34353806. PMCID: PMC8344302. DOI: 10.1136/bmjopen-2021-050448.
Abstract
INTRODUCTION Research has shown that improvements to the usability of medication alert systems are needed. For designers and decision-makers to assess the usability of their alert systems, two paper-based tools are currently available: the Instrument for Evaluating Human Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA) and the Tool for Evaluating Medication Alerting Systems (TEMAS). This study aims to compare the validity, usability, and usefulness of both tools to identify their strengths and limitations and to assist designers and decision-makers in making an informed decision about which tool is most suitable for assessing their current or prospective system. METHODS AND ANALYSIS First, TEMAS and I-MeDeSA will be translated into French, and the translation will be validated by three experts in human factors. Then, in 12 French hospitals with a medication alert system in place, staff with expertise in the system will evaluate their alert system using the two tools in succession. After using each tool, participants will be asked to fill in the System Usability Scale (SUS) and complete a survey on the understandability and perceived usefulness of the tool. After completing both assessments, participants will be asked to nominate their preferred tool and share their opinions on the tools. TEMAS and I-MeDeSA differ in design philosophy regarding the calculation of a score, which affects how the two tools can be compared. Convergent validity will be evaluated by matching the items of the two tools with respect to the usability dimensions they assess. SUS scores and survey answers will be compared statistically between I-MeDeSA and TEMAS to identify differences. Free-text survey responses will be analysed using an inductive approach. ETHICS AND DISSEMINATION Ethical approval is not required in France for a study of this nature. The results will be published in a peer-reviewed journal.
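The SUS used in this protocol has a fixed, well-documented scoring rule: ten 1-5 Likert items, odd-numbered items scored as (rating - 1), even-numbered items as (5 - rating), with the raw sum rescaled to 0-100. A minimal sketch of that standard computation (not code from the study):

```python
def sus_score(responses):
    """Standard SUS scoring. `responses` is a list of ten Likert
    ratings (1-5) in questionnaire order; odd-numbered items are
    positively worded, even-numbered items negatively worded."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten ratings on a 1-5 scale")
    adjusted = [(r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based
                for i, r in enumerate(responses)]
    return sum(adjusted) * 2.5  # rescale 0-40 raw sum to 0-100

# A neutral respondent (all 3s) lands exactly at the scale midpoint:
print(sus_score([3] * 10))  # 50.0
```

On this 0-100 scale, the scores of roughly 62 reported for both tools in the companion study sit just above the midpoint, which is conventionally read as marginal usability.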
Affiliation(s)
- Romaric Marcilly
- Univ. Lille, CHU Lille, ULR 2694 - METRICS: Évaluation des technologies de santé et des pratiques médicales, F-59000 Lille, France
- Inserm, CIC-IT 1403, F-59000 Lille, France
- Wu Yi Zheng
- Black Dog Institute, Randwick, New South Wales, Australia
- Regis Beuscart
- Univ. Lille, CHU Lille, ULR 2694 - METRICS: Évaluation des technologies de santé et des pratiques médicales, F-59000 Lille, France
- Melissa T Baysari
- The University of Sydney, Faculty of Medicine and Health, School of Medical Sciences, Biomedical Informatics and Digital Health, Sydney, New South Wales, Australia
4
Zheng WY, Van Dort B, Marcilly R, Day R, Burke R, Shakib S, Ku Y, Reid-Anderson H, Baysari M. A Tool for Evaluating Medication Alerting Systems: Development and Initial Assessment. JMIR Med Inform 2021;9:e24022. PMID: 34269680. PMCID: PMC8325080. DOI: 10.2196/24022.
Abstract
BACKGROUND It is well known that recommendations from electronic medication alerts are seldom accepted or acted on by users. Key factors affecting the effectiveness of medication alerts include system usability and alert design. Thus, human factors principles, which apply knowledge of human capabilities and limitations, are increasingly used in the design of health technology to improve system usability. OBJECTIVE This study aims to evaluate a newly developed, evidence-based self-assessment tool for the valid and reliable evaluation of computerized medication alerting systems. The tool was developed to be used by hospital staff with detailed knowledge of their hospital's computerized provider order entry system and its alerts to identify and address potential system deficiencies. In this initial assessment, we aim to determine whether the items in the tool can measure compliance of medication alerting systems with human factors design principles, whether the tool can be used consistently by multiple users to assess the same system, and whether the items are easy to understand and perceived as useful for assessing medication alerting systems. METHODS The Tool for Evaluating Medication Alerting Systems (TEMAS) was developed based on human factors design principles and consisted of 66 items. In total, 18 staff members recruited across 6 hospitals used the TEMAS to assess their medication alerting systems. Data collected from participant assessments were used to evaluate the validity, reliability, and usability of the TEMAS. Validity was assessed by comparing TEMAS results with those of prior in-house evaluations; reliability was measured using Krippendorff's α to determine agreement among assessors; and a 7-item survey was used to assess usability. RESULTS Participants reported mostly negative (n=8) or neutral (n=7) perceptions of the alerts in their medication alerting system. The validity of the TEMAS could not be tested directly, as participants were unaware of any results from prior in-house evaluations. The reliability of the TEMAS, as measured by Krippendorff's α, was low to moderate (range 0.26-0.46); participant feedback suggests that individuals' knowledge of the system varied with their professional background. In terms of usability, 61% (11/18) of participants reported that the TEMAS items were generally easy to understand, but participants suggested revising 22 items to improve clarity. CONCLUSIONS This initial assessment of the TEMAS identified components that required modification to improve usability and usefulness. It also showed that, for the TEMAS to support a comprehensive assessment of a medication alerting system, it should be completed by a multidisciplinary team of hospital staff from both clinical and technical backgrounds to maximize combined knowledge of the system.
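The reliability figures above come from Krippendorff's α, whose nominal-data form compares observed within-unit disagreement against the disagreement expected by chance (α = 1 means perfect agreement, 0 means chance-level). A minimal sketch of the standard nominal computation via a coincidence matrix (illustrative only; the study's exact handling of missing ratings is not described here):

```python
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list per rated item, each holding the labels assigned
    by the coders who rated it (length >= 2, no missing values)."""
    o = {}      # coincidence matrix, keyed by ordered label pairs
    n_c = {}    # marginal count per label
    n = 0       # total number of pairable values
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # items rated by a single coder are not pairable
        for a, b in permutations(range(m), 2):
            pair = (labels[a], labels[b])
            o[pair] = o.get(pair, 0) + 1.0 / (m - 1)
        for lab in labels:
            n_c[lab] = n_c.get(lab, 0) + 1
        n += m
    # Observed vs expected disagreement (off-diagonal mass)
    d_o = sum(v for (a, b), v in o.items() if a != b) / n
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Two coders agree on 3 of 4 items -> alpha well below raw agreement
print(krippendorff_alpha_nominal([[1, 1], [1, 1], [0, 1], [0, 0]]))
```

Unlike raw percent agreement, α discounts agreement expected by chance, which is why moderate-looking rating data can still yield values in the 0.26-0.46 range reported above.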
Affiliation(s)
- Wu Yi Zheng
- Black Dog Institute, Randwick, NSW, Australia; The University of Sydney, Faculty of Medicine and Health, School of Medical Sciences, Biomedical Informatics and Digital Health, Sydney, Australia
- Bethany Van Dort
- The University of Sydney, Faculty of Medicine and Health, School of Medical Sciences, Biomedical Informatics and Digital Health, Sydney, Australia
- Romaric Marcilly
- Univ Lille, CHU Lille, ULR 2694, METRICS: Évaluation des Technologies de santé et des Pratiques médicales, Lille, France; INSERM, CHU Lille, CIC-IT/Evalab 1403, Centre d'Investigation Clinique, Lille, France
- Richard Day
- University of New South Wales, Randwick, Australia
- Young Ku
- Hunter New England Local Health District, Newcastle, Australia
- Melissa Baysari
- The University of Sydney, Faculty of Medicine and Health, School of Medical Sciences, Biomedical Informatics and Digital Health, Sydney, Australia
5
Humphrey KE, Mirica M, Phansalkar S, Ozonoff A, Harper MB. Clinician Perceptions of Timing and Presentation of Drug-Drug Interaction Alerts. Appl Clin Inform 2020;11:487-496. PMID: 32698231. DOI: 10.1055/s-0040-1714276.
Abstract
OBJECTIVE Presenting clinical decision support recommendations as alerts is a common way of delivering information; however, many alerts are overridden, suggesting that presentation design can be improved. This study assessed pediatric prescribers' information needs for drug-drug interaction (DDI) alerts and evaluated the optimal timing and presentation of alerts in the medication ordering process. METHODS Six case scenarios presented interactions between medications used in the pediatric specialties of general medicine, infectious disease, cardiology, and neurology. Alert timing varied: alerts interrupted at medication selection, interrupted at order submission, or were noninterruptive. Interviews were audiotaped, transcribed, and independently analyzed to derive central themes. RESULTS Fourteen trainee and attending clinicians trained in pediatrics, cardiology, and neurology participated. Coders derived 8 central themes from 929 quotes. Discordance exists between medication prescribing frequency and DDI knowledge; providers may commonly prescribe medications for which they do not recognize DDIs. Providers wanted alerts at medication selection rather than at order signature. Alert presentation themes included standardizing text, providing interaction-specific incidence/risk information, DDI rating scales, consolidating alerts, and providing alternative therapies. Providers want alerts to be actionable, for example, allowing medication discontinuation, with color visual cues for essential information. Despite alert volume, participants did not "mind being reminded because there is always the chance that at that particular moment (they) do not remember it" and acknowledged the importance of alerts as "essential in terms of patient safety." CONCLUSION Clinicians unanimously agreed on the importance of receiving DDI alerts to improve patient safety. The perceived value of alerts can be improved by incorporating clinician preferences for timing and presentation.
Affiliation(s)
- Kate E Humphrey
- Patient Safety and Quality, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, United States
- Maria Mirica
- General Internal Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
- Shobha Phansalkar
- Division of General Medicine and Primary Care, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
- Al Ozonoff
- Precision Vaccines Program, Division of Infectious Diseases, Boston Children's Hospital, Boston, Massachusetts, United States; Department of Pediatrics, Harvard Medical School, Boston, Massachusetts, United States
- Marvin B Harper
- Emergency Medicine, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, United States
6
Bagri H, Dahri K, Legal M. Hospital Pharmacists' Perceptions and Decision-Making Related to Drug-Drug Interactions. Can J Hosp Pharm 2019;72:288-294. PMID: 31452540. PMCID: PMC6699872.
Abstract
BACKGROUND Pharmacists often overlook drug interaction alerts because of limitations in the clinical decision support (CDS) software systems intended to detect evidence-based, clinically significant drug-drug interactions (DDIs). Alert fatigue, which occurs when pharmacists become desensitized to an overload of DDI alerts, may also contribute. OBJECTIVES To gain a better understanding of how pharmacists assess common DDIs and of the extent to which computerized drug alerts affect their decision-making, as background for initiatives to overcome alert fatigue and improve detection of DDIs. METHODS This qualitative study used focus group methodology. A structured focus group was conducted at each of 3 large tertiary hospitals. Pharmacists were invited to participate if their jobs included patient care and/or dispensary responsibilities. The focus group discussions were audio-recorded and subsequently transcribed, analyzed, and coded into themes using NVivo software. Four main categories of themes were identified: perceived challenges, pharmacists' assessment of DDIs, barriers to responding to alerts, and proposed solutions. RESULTS The participants (n = 24) described large discrepancies among CDS software systems in the rated severity of specific DDIs, which made it difficult to view these systems as reliable sources. The participants agreed that alert fatigue is present and contributes to DDIs being overlooked. In addition, a lack of patient information for making an initial assessment, together with the constant need to multitask, prevents pharmacists from focusing on the evaluation of DDIs. CONCLUSIONS Although alert fatigue was reported to be a common reason why pharmacists miss DDIs, other barriers also exist. Participants suggested ways to limit DDI alerts to those that are clinically relevant: having a collaborative team of pharmacists periodically review the DDIs embedded in the CDS system, incorporating a colour-coding system, and removing duplicate entries were discussed as ways to improve system efficiency.
Affiliation(s)
- Harkaryn Bagri
- BSc, BScPharm, ACPR. Clinical Pharmacist, Surrey Memorial Hospital, Surrey, British Columbia
- Karen Dahri
- BSc, BScPharm, PharmD, ACPR, BCPS. Clinical Pharmacotherapeutic Specialist (Internal Medicine), Vancouver General Hospital, and Clinical Instructor, Faculty of Pharmaceutical Sciences, The University of British Columbia, Vancouver, British Columbia
- Michael Legal
- BScPharm, PharmD, ACPR. Clinical Pharmacy Specialist, Internal Medicine, St Paul's Hospital, and Clinical Associate Professor, Faculty of Pharmaceutical Sciences, The University of British Columbia, Vancouver, British Columbia
7
Lowenstein D, Zheng WY, Burke R, Kenny E, Sandhu A, Makeham M, Westbrook J, Day RO, Baysari MT. Do user preferences align with human factors assessment scores of drug-drug interaction alerts? Health Informatics J 2019;26:563-575. PMID: 30973280. DOI: 10.1177/1460458219840210.
Abstract
This study aimed to assess drug-drug interaction (DDI) alert interfaces and to examine the relationship between compliance with human factors principles and users' preferences for alerts. Three reviewers independently evaluated the DDI alert interfaces of seven electronic systems using the Instrument for Evaluating Human Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA). Fifty-three doctors and pharmacists completed a survey to rank the alert interfaces from best to worst and to report on liked and disliked features. Human factors compliance and user preferences were then compared. Statistical analysis revealed no significant association between I-MeDeSA scores and user preferences. However, the strengths and weaknesses of DDI alerts from the users' perspective were in line with the human factors constructs evaluated by the I-MeDeSA. The I-MeDeSA, in its current form, is unable to identify the alerts that users prefer. The design principles it assesses appear to be sound, but its arbitrary allocation of points to each human factors construct may not reflect the relative importance that end users place on different aspects of alert design.
Affiliation(s)
- Richard O Day
- UNSW Sydney, Australia; St Vincent's Hospital, Sydney, Australia