1
Young L, Vogelsmeier A. Quality Dashboards in Hospital Settings: A Systematic Review With Implications for Nurses. J Nurs Care Qual 2024; 39:188-194. [PMID: 37782907] [DOI: 10.1097/ncq.0000000000000747]
Abstract
BACKGROUND Dashboards visually display quality and safety data to aid nurses in making informed decisions. PURPOSE This systematic review evaluated quality improvement (QI) dashboard characteristics associated with interventions to improve patient outcomes and with positive end-user evaluation. METHODS Literature from 2012 to 2022 was searched in PubMed, CINAHL, Scopus, MEDLINE, and Google Scholar. RESULTS Sixteen articles were included. Dashboard characteristics varied, with mixed patient outcomes and end-user responses. Graphs and tabular presentations were associated with improved patient outcomes, whereas graphs were associated with end-user satisfaction. Benchmarks were associated with improved patient outcomes but not with end-user satisfaction. Interactivity was important to end users and was associated with improved patient outcomes. CONCLUSION Nurses can find dashboards helpful in guiding QI projects. Dashboards may include graphs and/or tables, benchmarks, and interactivity, but they should be useful, usable, and aligned with unit needs. Future research should focus on the use of quality dashboards in nursing practice.
Affiliation(s)
- Lisa Young
- University of Missouri School of Nursing, Columbia, Missouri
2
Tian WM, Chang D, Pressley M, Muhammed M, Fong P, Webster W, Herbert G, Gallagher S, Watters CR, Yoo JS, Zani S, Agarwal S, Allen PJ, Seymour KA. Development of a prospective biliary dashboard to compare performance and surgical cost. Surg Endosc 2023; 37:8829-8840. [PMID: 37626234] [DOI: 10.1007/s00464-023-10376-4]
Abstract
BACKGROUND Transparency around surgeon-level data may align healthcare delivery with quality care for patients. Biliary surgery includes numerous procedures performed by general surgeons and subspecialists alike. Cholecystectomy is a common surgical procedure and an optimal cohort for measuring quality outcomes within a healthcare system. METHODS Data were collected for 5084 biliary operations performed by 68 surgeons in 11 surgical divisions across a health system comprising a tertiary academic hospital, two regional community hospitals, and two ambulatory surgery centers. A privacy-protected dashboard was developed to compare surgeon performance and cost between July 2018 and June 2022. A sample cohort of patients ≥ 18 years who underwent cholecystectomy was compared by operative time, cost, and 30-day outcomes. RESULTS Over 4 years, 4568 cholecystectomy procedures were performed by 57 surgeons in four divisions, including 3846 (84.2%) laparoscopic cholecystectomies, 601 (13.2%) laparoscopic cholecystectomies with cholangiogram, and 121 (2.6%) open cholecystectomies. Patients were admitted from the emergency room in 2179 (47.7%) cases, while 2389 (52.3%) cases were performed in the ambulatory setting. Individual surgeons were compared with peers on volume, intraoperative data, cost, and outcomes. Cost was lowest at ambulatory surgery centers, yet only 4.2% of elective procedures were performed at these facilities. Prepackaged kits with indocyanine green were more expensive than cholangiograms that used iodinated contrast. The rate of emergency department visits was lowest when cases were performed at ambulatory surgery centers. CONCLUSION Data generated from clinical dashboards can inform surgeons as to how they compare with peers on quality metrics such as cost, time, and complications. In turn, this may guide strategies to standardize care, optimize efficiency, provide cost savings, and improve outcomes for cholecystectomy procedures. Future applications of clinical dashboards can assist surgeons and administrators in defining value-based care.
Affiliation(s)
- Doreen Chang
- Department of Surgery, Duke University, Durham, NC, USA
- Melissa Pressley
- Performance Services, Duke University Health System, Durham, NC, USA
- Makala Muhammed
- Performance Services, Duke University Health System, Durham, NC, USA
- Philip Fong
- Department of Surgery, Duke University, Durham, NC, USA
- Wendy Webster
- Department of Surgery, Duke University, Durham, NC, USA
- Garth Herbert
- Department of Surgery, Duke University, Durham, NC, USA
- Jin S Yoo
- Department of Surgery, Duke University, Durham, NC, USA
- Sabino Zani
- Department of Surgery, Duke University, Durham, NC, USA
- Peter J Allen
- Department of Surgery, Duke University, Durham, NC, USA
- Keri A Seymour
- Department of Surgery, Duke University, Durham, NC, USA
3
Vennemeyer S, Kinnear B, Gao A, Zhu S, Nattam A, Knopp MI, Warm E, Wu DT. User-Centered Evaluation and Design Recommendations for an Internal Medicine Resident Competency Assessment Dashboard. Appl Clin Inform 2023; 14:996-1007. [PMID: 38122817] [PMCID: PMC10733060] [DOI: 10.1055/s-0043-1777103]
Abstract
OBJECTIVES Clinical Competency Committee (CCC) members employ varied approaches to the review process, which makes it difficult to design a competency assessment dashboard that fits the needs of all members. This work details a user-centered evaluation of a dashboard currently utilized by the Internal Medicine Clinical Competency Committee (IM CCC) at the University of Cincinnati College of Medicine and presents the resulting design recommendations. METHODS Eleven members of the IM CCC participated in semistructured interviews with the research team. These interviews were recorded and transcribed for analysis. The three design research methods used in this study were process mapping (workflow diagrams), affinity diagramming, and a ranking experiment. RESULTS Through affinity diagramming, the research team identified and organized the opportunities for improvement in the current system expressed by study participants. These include a time-consuming preprocessing step, lack of integration of data from multiple sources, and different workflows for each step in the review process. Finally, the research team categorized nine dashboard components based on rankings provided by the participants. CONCLUSION We successfully conducted a user-centered evaluation of an IM CCC dashboard and generated four recommendations: programs should integrate quantitative and qualitative feedback, create multiple views to display these data based on user roles, work with designers to create a usable, interpretable dashboard, and develop a strong informatics pipeline to manage the system. To our knowledge, this type of user-centered evaluation has rarely been attempted in the medical education domain, so this study provides best practices for other residency programs seeking to evaluate current competency assessment tools and to develop new ones.
Affiliation(s)
- Scott Vennemeyer
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Benjamin Kinnear
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Andy Gao
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Siyi Zhu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
- Anunita Nattam
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- Michelle I. Knopp
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Division of Hospital Medicine, Cincinnati Children's Hospital Medical Center, Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Eric Warm
- Department of Internal Medicine, College of Medicine, University of Cincinnati, Ohio, United States
- Danny T.Y. Wu
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Ohio, United States
- Department of Pediatrics, College of Medicine, University of Cincinnati, Ohio, United States
- Medical Sciences Baccalaureate Program, College of Medicine, University of Cincinnati, Ohio, United States
- School of Design, College of Design, Architecture, Art, and Planning (DAAP), University of Cincinnati, Ohio, United States
4
Sreepada RS, Chang AC, West NC, Sujan J, Lai B, Poznikoff AK, Munk R, Froese NR, Chen JC, Görges M. Dashboard of Short-Term Postoperative Patient Outcomes for Anesthesiologists: Development and Preliminary Evaluation. JMIR Perioper Med 2023; 6:e47398. [PMID: 37725426] [PMCID: PMC10548316] [DOI: 10.2196/47398]
Abstract
BACKGROUND Anesthesiologists require an understanding of their patients' outcomes to evaluate their performance and improve their practice. Traditionally, anesthesiologists had limited information about their surgical outpatients' outcomes because of minimal contact after discharge. Leveraging digital health innovations to analyze personal and population outcomes may improve perioperative care. BC Children's Hospital's postoperative follow-up registry for outpatient surgeries collects short-term outcomes such as pain, nausea, and vomiting, yet these data were previously unavailable to anesthesiologists. OBJECTIVE This quality improvement study aimed to visualize postoperative outcome data to allow anesthesiologists to reflect on their care and compare their performance with their peers. METHODS The postoperative follow-up registry contains nurse-reported postoperative outcomes, including opioid and antiemetic administration in the postanesthetic care unit (PACU), and family-reported outcomes, including pain, nausea, and vomiting, within 24 hours post discharge. Dashboards were iteratively co-designed with 5 anesthesiologists, and a department-wide usability survey gathered anesthesiologists' feedback on the dashboards, allowing further design improvements. A final dashboard version has been deployed, with data updated weekly. RESULTS The dashboard contains three sections: (1) 24-hour outcomes, (2) PACU outcomes, and (3) a practice profile containing each anesthesiologist's case mix, grouped by age group, sex, and surgical service. At the time of evaluation, the dashboard included 24-hour data from 7877 cases collected from September 2020 to February 2023 and PACU data from 8716 cases collected from April 2021 to February 2023. The co-design process and usability evaluation indicated that anesthesiologists preferred simpler designs for data summaries but also required the ability to explore details of specific outcomes and cases when needed. Anesthesiologists considered security and confidentiality to be key features of the design, and most deemed the dashboard information useful and potentially beneficial for their practice. CONCLUSIONS We designed and deployed a dynamic, personalized dashboard for anesthesiologists to review their outpatients' short-term postoperative outcomes. The dashboard facilitates personal reflection on individual practice in the context of peer and departmental performance and, hence, the opportunity to evaluate iterative practice changes. Further work is required to establish its effect on improving individual and departmental performance and patient outcomes.
Affiliation(s)
- Rama Syamala Sreepada
- Department of Anesthesiology, Pharmacology and Therapeutics, The University of British Columbia, Vancouver, BC, Canada
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
- Ai Ching Chang
- Department of Anesthesiology, Pharmacology and Therapeutics, The University of British Columbia, Vancouver, BC, Canada
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
- Nicholas C West
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
- Jonath Sujan
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
- Brendan Lai
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
- Andrew K Poznikoff
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
- Department of Anesthesia, BC Children's Hospital, Vancouver, BC, Canada
- Rebecca Munk
- Department of Anesthesiology, Kelowna General Hospital, Kelowna, BC, Canada
- Norbert R Froese
- Department of Anesthesiology, Pharmacology and Therapeutics, The University of British Columbia, Vancouver, BC, Canada
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
- Department of Anesthesia, BC Children's Hospital, Vancouver, BC, Canada
- James C Chen
- Department of Anesthesiology, Pharmacology and Therapeutics, The University of British Columbia, Vancouver, BC, Canada
- Department of Anesthesia, BC Children's Hospital, Vancouver, BC, Canada
- Matthias Görges
- Department of Anesthesiology, Pharmacology and Therapeutics, The University of British Columbia, Vancouver, BC, Canada
- Research Institute, BC Children's Hospital, Vancouver, BC, Canada
5
Kim SH, Jin J, Sevinchan M, Davies A. How do automated reasoning features impact the usability of a clinical task management system? Development and usability testing of a prototype. Int J Med Inform 2023; 174:105067. [PMID: 37060639] [DOI: 10.1016/j.ijmedinf.2023.105067]
Abstract
BACKGROUND Electronic clinical task management systems (ECTMSs) have been developed and adopted by care providers to improve care coordination. Some systems utilised automated reasoning (AR) to enable more intelligent task management functionalities, such as automated task allocation, yet the impact of such features on usability remains unclear. Poor usability of health information systems has been reported to cause frustration and contribute to patient safety incidents. AIM To design AR features for an ECTMS and to evaluate their impact on usability. METHODS In this mixed methods study, four ECTMS feature prototypes were co-designed with two clinicians. For each prototype, one AR variant and one non-AR variant with equivalent functionalities were developed. Moderated usability testing was conducted with seven clinicians to obtain ease-of-use ratings of the prototypes and measure task durations. Parameters related to participants' demographics and attitudes were obtained via a questionnaire. A framework analysis was performed to summarise qualitative feedback. To determine statistical relationships between study variables, Spearman's rank coefficients were calculated and presented as a correlation matrix. RESULTS Three of the four prototypes received higher median ease-of-use ratings for AR variants and were associated with shorter average task durations. Multiple clinical use cases suitable for AR were identified. Preference for AR was found to correlate moderately with digital proficiency and prior experience with ECTMSs. Insufficient trust in automation, alert fatigue, and system customisation were identified as challenges in the adoption of AR features. CONCLUSIONS This study provides evidence for the potential of AR to enhance usability in ECTMSs. Consideration of users' psychological and organisational context in the feature design was found to be decisive for usability. Future research should explore implications for operational and clinical outcomes.
Affiliation(s)
- Su Hwan Kim
- Institute of Health Informatics, University College London, 222 Euston Road, London NW1 2DA, UK; Division of Informatics, Imaging & Data Sciences, School of Health Sciences, The University of Manchester, Manchester, UK
- Jessica Jin
- Department of Pediatrics, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
- Meryem Sevinchan
- Department of Neurology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Alan Davies
- Division of Informatics, Imaging & Data Sciences, School of Health Sciences, The University of Manchester, Manchester, UK
6
Usability Evaluation of Dashboards: A Systematic Literature Review of Tools. Biomed Res Int 2023; 2023:9990933. [PMID: 36874923] [PMCID: PMC9977530] [DOI: 10.1155/2023/9990933]
Abstract
Introduction In recent years, the use of dashboards in healthcare has been considered an effective approach for the visual presentation of information to support clinical and administrative decisions. Effective and efficient use of dashboards in clinical and managerial processes requires a framework for the design and development of tools based on usability principles. Objectives The present study aimed to investigate the existing questionnaires used for usability evaluation of dashboards and to present more specific usability criteria for evaluating them. Methods This systematic review was conducted using PubMed, Web of Science, and Scopus, without any time restrictions. The final search was performed on September 2, 2022. Data collection was performed using a data extraction form, and the content of selected studies was analyzed against the dashboard usability criteria. Results After reviewing the full text of relevant articles, a total of 29 studies were selected according to the inclusion criteria. Researcher-made questionnaires were used in five studies, while 25 studies applied previously used questionnaires. The most widely used questionnaires were the System Usability Scale (SUS), Technology Acceptance Model (TAM), Situation Awareness Rating Technique (SART), Questionnaire for User Interaction Satisfaction (QUIS), Unified Theory of Acceptance and Use of Technology (UTAUT), and Health Information Technology Usability Evaluation Scale (Health-ITUES). Finally, dashboard evaluation criteria were suggested, including usefulness, operability, learnability, ease of use, suitability for tasks, improvement of situational awareness, satisfaction, user interface, content, and system capabilities. Conclusion The reviewed studies mainly used general questionnaires that were not specifically designed for dashboard evaluation. The current study suggests specific criteria for measuring the usability of dashboards. When selecting usability evaluation criteria for dashboards, it is important to consider the evaluation objectives, the dashboard's features and capabilities, and the context of use.
7
Kuboki D, Kawahira H, Maeda Y, Oiwa K, Unoki T, Lefor AK, Sata N. An online feedback system for laparoscopic training during the COVID-19 pandemic: evaluation from the trainer perspective. Heliyon 2022; 8:e10303. [PMID: 35999836] [PMCID: PMC9388291] [DOI: 10.1016/j.heliyon.2022.e10303]
Abstract
Objective A system to provide feedback for laparoscopic training using an online conferencing system was developed during the COVID-19 pandemic. The purpose of this study was to evaluate this system from the trainer perspective. Design A procedural feedback system using an online conferencing system was devised. Setting Surgical training was observed using an online conferencing system (Zoom), and feedback was provided while viewing pre-recorded suture videos, a distinguishing feature of this system. The feedback was recorded, and trainer comments were converted into text, summarized as feedback items, and sorted by suture phase to facilitate reflection. Trainers completed a questionnaire concerning the usability of the online feedback session. Results Eleven trainers were selected, with an average of 21.9 ± 5.9 years (mean ± standard deviation) of experience as physicians. Classifying the comments by phase yielded a total of 32 feedback items. Based on questionnaire results, 91% of trainers were accustomed to using Zoom, and 100% felt that online procedural education was useful. More than 70% of trainers answered positively to all questions regarding both system effectiveness and efficiency. Only 55% of the trainers felt that the system was easy to use, but 91% were satisfied as trainers. Conclusions The questionnaire results suggest that this system has high usability for training. This online system could be a useful tool for providing feedback in situations where face-to-face education is difficult.
Affiliation(s)
- Daigo Kuboki
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University School of Medicine, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, Japan
- Department of Surgery, Kitaibaraki City Hospital, 1050, Sekimotoshimo, Sekinami-cho, Kitaibaraki-shi, Ibaraki, Japan
- Hiroshi Kawahira
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University School of Medicine, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, Japan
- Medical Simulation Center, Jichi Medical University School of Medicine, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, Japan
- Yoshitaka Maeda
- Medical Simulation Center, Jichi Medical University School of Medicine, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, Japan
- Kosuke Oiwa
- Department of Electrical Engineering and Electronics, Aoyama Gakuin University, 5-10-1, Fuchinobe, Chuo-ku, Sagamihara-shi, Kanagawa, Japan
- Teruhiko Unoki
- College of Foreign Studies, Kansai Gaidai University, 16-1, Nakamiyahigashino-cho, Hirakata-shi, Osaka, Japan
- Alan Kawarai Lefor
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University School of Medicine, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, Japan
- Naohiro Sata
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University School of Medicine, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, Japan
8
Analyzing historical and future acute neurosurgical demand using an AI-enabled predictive dashboard. Sci Rep 2022; 12:7603. [PMID: 35534601] [PMCID: PMC9084272] [DOI: 10.1038/s41598-022-11607-9]
Abstract
Characterizing acute service demand is critical for neurosurgery and other emergency-dominant specialties in order to dynamically distribute resources and ensure timely access to treatment. This is especially important in the post-COVID-19 pandemic period, when healthcare centers are grappling with a record backlog of pending surgical procedures and rising acute referral numbers. Healthcare dashboards are well placed to analyze these data, making key information about service and clinical outcomes available to staff in an easy-to-understand format. However, they typically provide insights based on inference rather than prediction, limiting their operational utility. Using a novel AI-enabled predictive dashboard, we retrospectively analyzed and prospectively forecasted acute neurosurgical referrals, based on 10,033 referrals made to a large-volume tertiary neurosciences center in London, UK, from the start of the COVID-19 pandemic lockdown period until October 2021. As anticipated, weekly referral volumes significantly increased during this period, largely owing to an increase in spinal referrals (p < 0.05). Applying validated time-series forecasting methods, we found that referrals were projected to increase beyond this time point, with Prophet demonstrating the best test and computational performance. Using a mixed-methods approach, we determined that the dashboard approach was usable, feasible, and acceptable among key stakeholders.
9
Sinabell I, Ammenwerth E. Agile, Easily Applicable, and Useful eHealth Usability Evaluations: Systematic Review and Expert-Validation. Appl Clin Inform 2022; 13:67-79. [PMID: 35263798] [PMCID: PMC8906994] [DOI: 10.1055/s-0041-1740919]
Abstract
Background
Electronic health (eHealth) usability evaluations of rapidly developed eHealth systems are difficult to accomplish because traditional usability evaluation methods require substantial time in preparation and implementation. This illustrates the growing need for fast, flexible, and cost-effective methods to evaluate the usability of eHealth systems. To address this demand, the present study systematically identified and expert-validated rapidly deployable eHealth usability evaluation methods.
Objective
Identification and prioritization of eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations.
Methods
The study design comprised a systematic iterative approach in which expert knowledge was contrasted with findings from the literature. Forty-three eHealth usability evaluation methods were systematically identified and assessed for ease of applicability and usefulness through semi-structured interviews with 10 European usability experts and a systematic literature search. The most appropriate eHealth usability evaluation methods were then selected stepwise based on the experts' judgements of their ease of applicability and usefulness.
Results
Of the 43 eHealth usability evaluation methods identified, 10 were recommended by the experts based on their usefulness for rapid eHealth usability evaluations. The three most frequently recommended methods were Remote User Testing, Expert Review, and the Rapid Iterative Test and Evaluation Method. Eleven methods, such as Retrospective Testing, were not recommended for use in rapid eHealth usability evaluations.
Conclusion
We conducted a systematic review and expert-validation to identify rapidly deployable eHealth usability evaluation methods. The comprehensive and evidence-based prioritization of eHealth usability evaluation methods supports faster usability evaluations, and so contributes to the ease-of-use of emerging eHealth systems.
Affiliation(s)
- Irina Sinabell
- Department of Biomedical Computer Science and Mechatronics, Institute of Medical Informatics, UMIT, Private University of Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
- Elske Ammenwerth
- Department of Biomedical Computer Science and Mechatronics, Institute of Medical Informatics, UMIT, Private University of Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
10
Jonnalagadda P, Swoboda C, Singh P, Gureddygari H, Scarborough S, Dunn I, Doogan NJ, Fareed N. Developing Dashboards to Address Children's Health Disparities in Ohio. Appl Clin Inform 2022; 13:100-112. [PMID: 35081656] [PMCID: PMC8791762] [DOI: 10.1055/s-0041-1741482]
Abstract
OBJECTIVES Social determinants of health (SDoH) can be measured at the geographic level to convey information about neighborhood deprivation. The Ohio Children's Opportunity Index (OCOI) is a composite area-level opportunity index comprising eight health domains. Our research team has documented the design, development, and use cases of a dashboard solution for visualizing the OCOI. METHODS The OCOI is a multidomain index spanning the following eight domains: (1) family stability, (2) infant health, (3) children's health, (4) access, (5) education, (6) housing, (7) environment, and (8) criminal justice. Information on these domains is derived from the American Community Survey and other administrative datasets. Our team used the Tableau Desktop visualization software and applied a user-centered design approach to develop two dashboards: the main OCOI dashboard and the OCOI-race dashboard. We also performed a convergence analysis to visualize the census tracts where different health indicators are simultaneously at their worst levels. RESULTS The OCOI dashboard has multiple interactive components: a choropleth map of Ohio displaying OCOI scores for a specific census tract, graphs presenting OCOI or domain scores to compare relative positions of tracts, and a sortable table to visualize scores for specific counties and census tracts. A case study using the two dashboards for convergence analysis revealed census tracts in neighborhoods with low infant health scores and a high proportion of minority population. CONCLUSION The OCOI dashboards could assist healthcare leaders in making decisions that enhance healthcare delivery and in policy decision-making regarding children's health, particularly in areas where multiple health indicators are at their worst levels.
Affiliation(s)
- Pallavi Jonnalagadda
- CATALYST, Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Christine Swoboda
- CATALYST, Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Priti Singh
- CATALYST, Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Harish Gureddygari
- CATALYST, Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Seth Scarborough
- CATALYST, Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Ian Dunn
- The Ohio Colleges of Medicine Government Resource Center, Columbus, Ohio, United States
- Nathan J. Doogan
- The Ohio Colleges of Medicine Government Resource Center, Columbus, Ohio, United States
- Naleef Fareed
- CATALYST, Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, Ohio, United States
Collapse
|
11
|
Tsang JY, Peek N, Buchan I, van der Veer SN, Brown B. OUP accepted manuscript. J Am Med Inform Assoc 2022; 29:1106-1119. [PMID: 35271724 PMCID: PMC9093027 DOI: 10.1093/jamia/ocac031] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 02/08/2021] [Accepted: 02/24/2022] [Indexed: 11/26/2022] Open
Abstract
Objectives (1) Systematically review the literature on computerized audit and feedback (e-A&F) systems in healthcare. (2) Compare features of current systems against e-A&F best practices. (3) Generate hypotheses on how e-A&F systems may impact patient care and outcomes. Methods We searched MEDLINE (Ovid), EMBASE (Ovid), and CINAHL (Ebsco) databases to December 31, 2020. Two reviewers independently performed selection, extraction, and quality appraisal (Mixed Methods Appraisal Tool). System features were compared with 18 best practices derived from Clinical Performance Feedback Intervention Theory. We then used realist concepts to generate hypotheses on mechanisms of e-A&F impact. Results are reported in accordance with the PRISMA statement. Results Our search yielded 4301 unique articles. We included 88 studies evaluating 65 e-A&F systems, spanning a diverse range of clinical areas, including medical, surgical, and general practice settings. Systems adopted a median of 8 best practices (interquartile range 6–10), with 32 systems providing near real-time feedback data and 20 systems incorporating action planning. High-confidence hypotheses suggested that favorable e-A&F systems prompted specific actions, particularly when enabled by timely and role-specific feedback (including patient lists and individual performance data) and embedded action plans, thereby improving system usage, care quality, and patient outcomes. Conclusions e-A&F systems continue to be developed for many clinical applications. Yet, several systems still lack basic features recommended by best practice, such as timely feedback and action planning. Systems should focus on actionability, by providing real-time data for feedback that is specific to user roles, with embedded action plans. Protocol Registration PROSPERO CRD42016048695.
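The feature-versus-best-practice comparison in this review reduces to tallying, per system, how many checklist items are adopted, then summarizing with a median and interquartile range. The sketch below illustrates that tally; the practice names and systems are invented for illustration and are not the 18 items from Clinical Performance Feedback Intervention Theory.

```python
# Sketch of comparing e-A&F system features against a best-practice
# checklist, then summarizing adoption with median and IQR.
# Practice names and systems are illustrative, not the review's instrument.
from statistics import median

def adoption_counts(systems, practices):
    """Count how many checklist practices each system adopts."""
    return {name: sum(1 for p in practices if p in feats)
            for name, feats in systems.items()}

def iqr(values):
    """Simple median-of-halves quartiles (one of several IQR conventions)."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    lower = ordered[:mid]
    upper = ordered[mid + (len(ordered) % 2):]
    return median(lower), median(upper)

practices = ["timely_feedback", "action_planning", "patient_lists",
             "individual_data", "benchmarks"]
systems = {
    "SystemA": {"timely_feedback", "patient_lists", "individual_data"},
    "SystemB": {"action_planning"},
    "SystemC": {"timely_feedback", "action_planning", "patient_lists",
                "individual_data", "benchmarks"},
}
counts = adoption_counts(systems, practices)
print(counts, median(counts.values()), iqr(counts.values()))
```

Note that IQR conventions differ (exclusive vs. inclusive quartiles); a production analysis should state which convention it uses.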
Affiliation(s)
- Jung Yin Tsang (corresponding author): Centre for Primary Care and Health Services Research, University of Manchester, 6th Floor Williamson Building, Oxford Road, Manchester M13 9PL, UK
- Niels Peek: Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK; NIHR Greater Manchester Patient Safety Translational Research Centre (GMPSTRC), University of Manchester, Manchester, UK; NIHR Applied Research Collaboration Greater Manchester, University of Manchester, Manchester, UK
- Iain Buchan: Institute of Population Health, University of Liverpool, Liverpool, UK
- Sabine N van der Veer: Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Benjamin Brown: Centre for Health Informatics, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK; Centre for Primary Care and Health Services Research, University of Manchester, Manchester, UK; NIHR Greater Manchester Patient Safety Translational Research Centre (GMPSTRC), University of Manchester, Manchester, UK
|
12
|
A Visual Dashboard to Monitor Restraint Use in Hospitalized Psychiatry Patients. Jt Comm J Qual Patient Saf 2021; 47:282-287. [PMID: 33648859 DOI: 10.1016/j.jcjq.2021.01.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 01/12/2021] [Accepted: 01/13/2021] [Indexed: 11/23/2022]
Abstract
BACKGROUND Restraint events are tracked using a duration rate as part of a national psychiatry quality reporting program and reported annually. Visual dashboards can help track metrics in near real time but are not routinely used in psychiatric settings. METHODS This observational study sought to characterize restraint events by extracting electronic medical record data on restraint episodes between January 1, 2017, and December 31, 2019, in five inpatient units at one academic medical center. The data were also used to build a visual dashboard and calculate restraint metrics (duration and frequency) across locations and time. RESULTS A total of 540 distinct restraint events occurred during the study period. The highest restraint episode counts occurred during the evening shift (54.8%), compared with the daytime (37.2%) and nighttime (8.0%) shifts. The highest episode duration rate occurred in an adult unit (61.3% of total hours spent in restraints across all units), while the highest episode count occurred in the adolescent unit (48.3% of all restraint episodes). A visual dashboard with two views (summary and detailed) was created. The summary view integrates patient volume data (total patient hours per month) with total duration and number of episodes per month. The detailed view displays event frequency by hour of day, nursing shift, weekday, and patient length of stay at the time of restraint. CONCLUSIONS Visual dashboards can provide timely and efficient access to granular data elements and metrics related to restraint events, beyond the reporting requirements of a national quality program. Visual dashboards can reveal variations in restraint use and yield important opportunities for clinical quality improvement.
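The metrics behind such a dashboard (episode counts by shift, each unit's share of total restraint hours) can be computed directly from episode records. The sketch below is a hypothetical illustration: the shift boundaries, field names, and event data are assumptions, not the study's definitions.

```python
# Sketch of per-shift restraint episode counts and per-unit share of total
# restraint hours, from a flat list of episode records.
# Shift boundaries and data are assumptions for illustration.
from collections import Counter

def shift_of(hour):
    """Map an hour of day to an assumed nursing shift."""
    if 7 <= hour < 15:
        return "day"
    if 15 <= hour < 23:
        return "evening"
    return "night"

def restraint_metrics(events):
    """events: list of dicts with start_hour, duration_hours, and unit."""
    by_shift = Counter(shift_of(e["start_hour"]) for e in events)
    hours_by_unit = Counter()
    for e in events:
        hours_by_unit[e["unit"]] += e["duration_hours"]
    total = sum(hours_by_unit.values())
    share = {u: round(100 * h / total, 1) for u, h in hours_by_unit.items()}
    return by_shift, share

events = [
    {"start_hour": 18, "duration_hours": 4.0, "unit": "adult"},
    {"start_hour": 9,  "duration_hours": 1.0, "unit": "adolescent"},
    {"start_hour": 20, "duration_hours": 2.0, "unit": "adolescent"},
    {"start_hour": 2,  "duration_hours": 1.0, "unit": "adult"},
]
shift_counts, unit_share = restraint_metrics(events)
print(shift_counts)  # episode counts per shift
print(unit_share)    # percent of restraint hours by unit
```

The summary and detailed dashboard views the abstract describes would layer time aggregation (per month, per weekday) on top of exactly these kinds of tallies.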
|
13
|
Cheng DR, South M. Electronic Task Management System: A Pediatric Institution's Experience. Appl Clin Inform 2020; 11:839-845. [PMID: 33327035 DOI: 10.1055/s-0040-1721321] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Electronic medical task management systems (ETMs) have been adopted in health care institutions to improve health care provider communication. ETMs allow for the requesting and resolution of nonurgent tasks between clinicians of all craft groups. Visibility, the ability to provide closed-loop feedback, and a digital trail of all decisions and responsible clinicians are key features of ETMs. An embedded ETM within an integrated electronic health record (EHR) was introduced to the Royal Children's Hospital Melbourne on April 30, 2016. The ETM is used hospital-wide for nonurgent tasks 24 hours a day. It facilitates communication of nonurgent tasks between clinical staff, with a designated timeframe in which the task needs to be completed (2, 4, or 8 hours). OBJECTIVE This study aims to examine the usage of the ETM at our institution since its inception. METHODS ETM usage data from the first 3 years of use (April 2016 to April 2019) were extracted from the EHR. Data collected included age of patient, date and time of task request, ward, unit, type of task, urgency of task, requestor role, and time to completion. RESULTS A total of 136,481 tasks were placed via the ETM in the study period. There were approximately 125 tasks placed each day (24-hour period). The most common time of task placement was around 6:00 p.m. Task placement peaked at approximately 8 a.m., 2 p.m., and 9 p.m., consistent with nursing shift change times. In total, 63.16% of tasks were placed outside business hours, indicating predominant usage for after-hours task communication. The ETM was most highly utilized by surgical units. The majority of tasks were ordered by nurses for medical staff to complete (97.01%). A significant proportion (98.79%) of tasks were marked as complete on the ETM, indicating closed-loop feedback after tasks were requested. CONCLUSION An ETM function embedded in our EHR has been highly utilized in our institution since its introduction. It benefits clinicians through workflow efficiencies and improved communication and workflow management. By allowing collection, tracking, audit, and prioritization of tasks, it also provides a stream of actionable data for quality improvement activities.
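Two of the headline figures in this study, the share of tasks placed after hours and the closed-loop completion rate, are straightforward to derive from timestamped task records. The sketch below assumes an 08:00-17:00 weekday business window and invented field names; neither is from the study.

```python
# Sketch of ETM usage summary: percent of tasks placed outside business
# hours and percent marked complete (closed-loop). The business-hours
# window and field names are assumptions for illustration.
from datetime import datetime

BUSINESS_START, BUSINESS_END = 8, 17  # assumed 08:00-17:00, Mon-Fri

def after_hours(ts):
    """True if the timestamp falls on a weekend or outside business hours."""
    return ts.weekday() >= 5 or not (BUSINESS_START <= ts.hour < BUSINESS_END)

def usage_summary(tasks):
    """tasks: list of dicts with a 'placed' datetime and a 'completed' bool."""
    n = len(tasks)
    ah = sum(1 for t in tasks if after_hours(t["placed"]))
    done = sum(1 for t in tasks if t["completed"])
    return {"after_hours_pct": round(100 * ah / n, 2),
            "completion_pct": round(100 * done / n, 2)}

tasks = [
    {"placed": datetime(2019, 4, 1, 21, 15), "completed": True},   # Mon night
    {"placed": datetime(2019, 4, 2, 10, 0),  "completed": True},   # Tue morning
    {"placed": datetime(2019, 4, 6, 14, 0),  "completed": True},   # Saturday
    {"placed": datetime(2019, 4, 3, 18, 30), "completed": False},  # Wed evening
]
print(usage_summary(tasks))
```

Hour-of-day histograms (to spot the shift-change peaks the study reports) would follow the same pattern, bucketing `t["placed"].hour` with a Counter.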
Affiliation(s)
- Daryl R Cheng: Department of General Medicine, The Royal Children's Hospital Melbourne, Parkville, Victoria, Australia; EMR Project Team, The Royal Children's Hospital Melbourne, Parkville, Victoria, Australia; Department of Paediatrics, University of Melbourne, Melbourne, Victoria, Australia; Murdoch Children's Research Institute, Parkville, Victoria, Australia
- Mike South: Department of General Medicine, The Royal Children's Hospital Melbourne, Parkville, Victoria, Australia; EMR Project Team, The Royal Children's Hospital Melbourne, Parkville, Victoria, Australia; Department of Paediatrics, University of Melbourne, Melbourne, Victoria, Australia; Murdoch Children's Research Institute, Parkville, Victoria, Australia
|
14
|
Pierce RP, Eskridge BR, Rehard L, Ross B, Day MA, Belden JL. The Effect of Electronic Health Record Usability Redesign on Annual Screening Rates in an Ambulatory Setting. Appl Clin Inform 2020; 11:580-588. [PMID: 32906152 DOI: 10.1055/s-0040-1715828] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
OBJECTIVES Improving the usability of electronic health records (EHRs) continues to be a focus of clinicians, vendors, researchers, and regulatory bodies. To understand the impact of usability redesign of an existing, site-configurable feature, we evaluated the user interface (UI) used to screen for depression, alcohol and drug misuse, fall risk, and the existence of advance directive information in ambulatory settings. METHODS As part of a quality improvement project, the existing UI was redesigned based on heuristic analysis. Using an iterative, user-centered design process, several usability defects were corrected. Summative usability testing was performed as part of the product development and implementation cycle. Clinical quality measures reflecting rolling 12-month screening rates were examined over the 8 months prior to implementation of the redesigned UI and the 9 months after implementation. RESULTS Summative usability testing demonstrated improvements in task time, error rates, and System Usability Scale scores. Interrupted time series analysis demonstrated significant improvements in all screening rates after implementation of the redesigned UI compared with the original implementation. CONCLUSION User-centered redesign of an existing site-specific UI may lead to significant improvements in measures of usability and quality of patient care.
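The interrupted time series analysis used here is typically a segmented regression: fit a pre-intervention level and trend, plus a level-change and trend-change term at the intervention point. The sketch below is a stdlib-only toy version with synthetic data, not the study's model or screening rates; a real analysis would also handle autocorrelation and confidence intervals.

```python
# Minimal interrupted time series sketch: segmented regression
#   y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t
# estimating a level change (b2) and slope change (b3) at intervention t0.
# Pure-stdlib least squares on synthetic data.

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def its_fit(y, t0):
    """Ordinary least squares for the segmented regression above."""
    X = [[1.0, float(t), float(t >= t0), (t - t0) * float(t >= t0)]
         for t in range(len(y))]
    k = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(y))) for a in range(k)]
    return solve(XtX, Xty)

# Synthetic screening rates: flat at 40% for 8 months, then +10 points.
y = [40.0] * 8 + [50.0] * 9
b0, b1, b2, b3 = its_fit(y, t0=8)
print(round(b2, 2))  # estimated level change at the redesign
```

Because the synthetic series is exactly piecewise flat, the fit recovers the jump of 10 percentage points as the level-change coefficient.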
Affiliation(s)
- Robert P Pierce: Department of Family and Community Medicine, University of Missouri, Columbia, Missouri, United States
- Bernie R Eskridge: Department of Child Health, University of Missouri, Columbia, Missouri, United States
- LeAnn Rehard: Nursing Informatics, University of Missouri Health Care, Columbia, Missouri, United States
- Brandi Ross: Tiger Institute, Cerner Corporation, Columbia, Missouri, United States
- Margaret A Day: Department of Family and Community Medicine, University of Missouri, Columbia, Missouri, United States
- Jeffery L Belden: Department of Family and Community Medicine, University of Missouri, Columbia, Missouri, United States; Tiger Institute, Cerner Corporation, Columbia, Missouri, United States
|
15
|
Fareed N, Swoboda CM, Jonnalagadda P, Griesenbrock T, Gureddygari HR, Aldrich A. Visualizing Opportunity Index Data Using a Dashboard Application: A Tool to Communicate Infant Mortality-Based Area Deprivation Index Information. Appl Clin Inform 2020; 11:515-527. [PMID: 32757202 PMCID: PMC7406368 DOI: 10.1055/s-0040-1714249] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Accepted: 06/09/2020] [Indexed: 01/04/2023] Open
Abstract
BACKGROUND An area deprivation index (ADI) is a geographical measure that accounts for socioeconomic factors (e.g., crime, health, and education). The state of Ohio developed an ADI associated with infant mortality: the Ohio Opportunity Index (OOI). However, a powerful tool to present this information effectively to stakeholders was needed. OBJECTIVES We present a real use case by documenting the design, development, deployment, and training processes associated with a dashboard solution visualizing ADI data. METHODS The Opportunity Index Dashboard (OID) allows for interactive exploration of the OOI and its seven domains: transportation, education, employment, housing, health, access to services, and crime. We used a user-centered design approach involving feedback sessions with stakeholders, who included representatives from project sponsors and subject matter experts. We assessed the usability of the OID on the effectiveness, efficiency, and satisfaction dimensions. The process of designing, developing, deploying, and training users on the OID is described. RESULTS We report feedback provided by stakeholders for the OID categorized by function, content, and aesthetics. The OID has multiple interactive components: a choropleth map displaying OOI scores for a specific census tract, graphs presenting OOI or domain scores between tracts to compare relative positions, and a sortable table to visualize scores for specific counties and census tracts. Changes based on parameter and filter selections are described using a general use case. In the usability evaluation, the median task completion success rate was 83% and the median System Usability Scale score was 68. CONCLUSION The OID could assist health care leaders in making decisions that enhance care delivery and policy decision-making regarding infant mortality. The dashboard helps communicate deprivation data across domains in a clear and concise manner. Our experience building this dashboard presents a template for developing dashboards that can address other health priorities.
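The median System Usability Scale score of 68 reported in this usability evaluation follows the standard SUS scoring rule: ten items rated 1-5, odd items scored as (rating - 1), even items as (5 - rating), with the sum scaled by 2.5 to a 0-100 range. A minimal implementation:

```python
# Standard System Usability Scale (SUS) scoring: 10 items rated 1-5,
# odd-numbered (positively worded) items contribute (rating - 1),
# even-numbered (negatively worded) items contribute (5 - rating),
# and the total is multiplied by 2.5 to yield a 0-100 score.

def sus_score(responses):
    """responses: list of 10 ratings (1-5), item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # → 75.0
```

A score of 68 is the commonly cited average across SUS studies, which contextualizes the dashboard's result.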
Affiliation(s)
- Naleef Fareed: CATALYST – The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States; Department of Biomedical Informatics, College of Medicine, Institute for Behavioral Medicine Research, The Ohio State University, Columbus, Ohio, United States
- Christine M. Swoboda: CATALYST – The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Pallavi Jonnalagadda: CATALYST – The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States; Department of Biomedical Informatics, College of Medicine, Institute for Behavioral Medicine Research, The Ohio State University, Columbus, Ohio, United States
- Tyler Griesenbrock: CATALYST – The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Harish R. Gureddygari: CATALYST – The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Alison Aldrich: CATALYST – The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
|