1
Fareed N, Olvera RG, Wang Y, Hayes M, Larimore EL, Balvanz P, Langley R, Noel CA, Rock P, Redmond D, Neufeld J, Kosakowski S, Harris D, LaRochelle M, Huerta TR, Glasgow L, Oga E, Villani J, Wu E. Lessons Learned From Developing Dashboards to Support Decision-Making for Community Opioid Response by Community Stakeholders: Mixed Methods and Multisite Study. JMIR Hum Factors 2024;11:e51525. PMID: 39250216; PMCID: PMC11420584; DOI: 10.2196/51525.
Abstract
BACKGROUND Data dashboards are published tools that present visualizations; they are increasingly used to display data about behavioral health, social determinants of health, and chronic and infectious disease risks to inform or support public health endeavors. Dashboards can be an evidence-based approach used by communities to influence decision-making in health care for specific populations. Despite widespread use, evidence on how to best design and use dashboards in the public health realm is limited. There is also a notable dearth of studies that examine and document the complexity and heterogeneity of dashboards in community settings. OBJECTIVE Community stakeholders engaged in the community response to the opioid overdose crisis could benefit from the use of data dashboards for decision-making. As part of the Communities That HEAL (CTH) intervention, community data dashboards were created for stakeholders to support decision-making. We assessed stakeholders' perceptions of the usability and use of the CTH dashboards for decision-making. METHODS We conducted a mixed methods assessment between June and July 2021 on the use of CTH dashboards. We administered the System Usability Scale (SUS) and conducted semistructured group interviews with users in 33 communities across 4 states of the United States. The SUS comprises 10 five-point Likert-scale questions measuring usability, each scored from 0 to 4. The interview guides were informed by the technology adoption model (TAM) and focused on perceived usefulness, perceived ease of use, intention to use, and contextual factors. RESULTS Overall, 62 users of the CTH dashboards completed the SUS and interviews. SUS scores (grand mean 73, SD 4.6) indicated that CTH dashboards were within the acceptable range for usability. 
From the qualitative interview data, we inductively created subthemes within the 4 dimensions of the TAM to contextualize stakeholders' perceptions of the dashboard's usefulness and ease of use, their intention to use, and contextual factors. These data also highlighted gaps in knowledge, design, and use, which could help focus efforts to improve the use and comprehension of dashboards by stakeholders. CONCLUSIONS We present a set of prioritized gaps identified by our national group and list a set of lessons learned for improved data dashboard design and use for community stakeholders. Findings from our novel application of both the SUS and TAM provide insights and highlight important gaps and lessons learned to inform the design of data dashboards for use by decision-making community stakeholders. TRIAL REGISTRATION ClinicalTrials.gov NCT04111939; https://clinicaltrials.gov/study/NCT04111939.
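The SUS scoring described above (ten five-point Likert items, each contributing 0 to 4 after conversion, with the total scaled to 0-100) can be sketched as follows. This is the standard Brooke scoring convention, not code from the study; the function name and input format are illustrative:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed 0-40 total is scaled by 2.5 to the familiar 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1, 3, ... sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

For example, a maximally positive response pattern (5 on odd items, 1 on even items) yields 100, and all-neutral responses (3 throughout) yield 50, which helps locate the study's grand mean of 73 on the scale.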
Affiliation(s)
- Naleef Fareed
  - Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, OH, United States
- Ramona G Olvera
  - Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, OH, United States
- Yiting Wang
  - Department of Research Information Technology, College of Medicine, The Ohio State University, Columbus, OH, United States
- Michael Hayes
  - Research Triangle Institute, Research Triangle Park, NC, United States
- Elizabeth Liz Larimore
  - Center for Drug and Alcohol Research, University of Kentucky, Lexington, KY, United States
- Peter Balvanz
  - Clinical Addiction Research and Evaluation Unit, Section of General Internal Medicine, Boston Medical Center, Boston, MA, United States
- Ronald Langley
  - Center for Drug and Alcohol Research, University of Kentucky, Lexington, KY, United States
- Corinna A Noel
  - Department of Public and Ecosystem Health, Cornell University, Ithaca, NY, United States
- Peter Rock
  - Center for Drug and Alcohol Research, University of Kentucky, Lexington, KY, United States
- Daniel Redmond
  - Institute for Biomedical Informatics, University of Kentucky, Lexington, KY, United States
- Jessica Neufeld
  - Social Intervention Group, School of Social Work, Columbia University, New York, NY, United States
- Sarah Kosakowski
  - Clinical Addiction Research and Evaluation Unit, Section of General Internal Medicine, Boston Medical Center, Boston, MA, United States
- Daniel Harris
  - Institute for Pharmaceutical Outcomes and Policy, University of Kentucky, Lexington, KY, United States
- Marc LaRochelle
  - Clinical Addiction Research and Evaluation Unit, Section of General Internal Medicine, Boston Medical Center, Boston, MA, United States
- Timothy R Huerta
  - Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, OH, United States
  - Department of Research Information Technology, College of Medicine, The Ohio State University, Columbus, OH, United States
- LaShawn Glasgow
  - Research Triangle Institute, Research Triangle Park, NC, United States
- Emmanuel Oga
  - Department of Research Information Technology, College of Medicine, The Ohio State University, Columbus, OH, United States
- Elwin Wu
  - Social Intervention Group, School of Social Work, Columbia University, New York, NY, United States
2
A Bounded Measure for Estimating the Benefit of Visualization (Part II): Case Studies and Empirical Evaluation. Entropy 2022;24:282. PMID: 35205574; PMCID: PMC8871169; DOI: 10.3390/e24020282.
Abstract
Many visual representations, such as volume-rendered images and metro maps, feature a noticeable amount of information loss due to a variety of many-to-one mappings. At a glance, there seem to be numerous opportunities for viewers to misinterpret the data being visualized, thereby undermining the benefits of these visual representations. In practice, there is little doubt that these visual representations are useful. The recently proposed information-theoretic measure for analyzing the cost–benefit ratio of visualization processes can explain such usefulness experienced in practice and postulates that the viewers’ knowledge can reduce the potential distortion (e.g., misinterpretation) due to information loss. This suggests that viewers’ knowledge can be estimated by comparing the potential distortion without any knowledge and the actual distortion with some knowledge. However, the existing cost–benefit measure contains an unbounded divergence term, making the numerical measurements difficult to interpret. This is the second part of a two-part paper, which aims to improve the existing cost–benefit measure. Part I of the paper provided a theoretical discourse about the problem of unboundedness, reported a conceptual analysis of nine candidate divergence measures for resolving the problem, and eliminated three from further consideration. In this Part II, we describe two groups of case studies for evaluating the remaining six candidate measures empirically. In particular, we obtained instance data for (i) supporting the evaluation of the remaining candidate measures and (ii) demonstrating their applicability in practical scenarios for estimating the cost–benefit of visualization processes as well as the impact of human knowledge in the processes. The real-world data about visualization provide practical evidence for evaluating the usability and intuitiveness of the candidate measures. 
The combination of the conceptual analysis in Part I and the empirical evaluation in this part allows us to select the most appropriate bounded divergence measure for improving the existing cost–benefit measure.
3
A Bounded Measure for Estimating the Benefit of Visualization (Part I): Theoretical Discourse and Conceptual Evaluation. Entropy 2022;24:228. PMID: 35205522; PMCID: PMC8870844; DOI: 10.3390/e24020228.
Abstract
Information theory can be used to analyze the cost–benefit of visualization processes. However, the current measure of benefit contains an unbounded term that is neither easy to estimate nor intuitive to interpret. In this work, we propose to revise the existing cost–benefit measure by replacing the unbounded term with a bounded one. We examine a number of bounded measures that include the Jensen–Shannon divergence, its square root, and a new divergence measure formulated as part of this work. We describe the rationale for proposing a new divergence measure. In the first part of this paper, we focus on the conceptual analysis of the mathematical properties of these candidate measures. We use visualization to support the multi-criteria comparison, narrowing the search down to several options with better mathematical properties. The theoretical discourse and conceptual evaluation in this part provide the basis for further data-driven evaluation based on synthetic and experimental case studies that are reported in the second part of this paper.
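The boundedness property at issue can be illustrated with a small sketch: the Jensen–Shannon divergence of two distributions never exceeds 1 bit, whereas the Kullback–Leibler term it would replace diverges when one distribution assigns zero probability where the other does not. The function names below are illustrative, not the authors' implementation:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; terms with p_i = 0
    contribute nothing by convention. Unbounded when q_i = 0 but p_i > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: the mean of the KL divergences of p and q
    from their midpoint mixture m. Symmetric and bounded above by 1 bit."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For two completely disjoint distributions such as (1, 0) and (0, 1), `js` returns exactly 1 bit (its maximum), while the corresponding KL term would be infinite; this bounded range is what makes the numerical measurements interpretable.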
4
Fareed N, Swoboda CM, Chen S, Potter E, Wu DTY, Sieck CJ. U.S. COVID-19 State Government Public Dashboards: An Expert Review. Appl Clin Inform 2021;12:208-221. PMID: 33853140; PMCID: PMC8046590; DOI: 10.1055/s-0041-1723989.
Abstract
BACKGROUND In the United States, all 50 state governments deployed publicly viewable dashboards regarding the novel coronavirus disease 2019 (COVID-19) to track and respond to the pandemic. State dashboards, however, reflect idiosyncratic design practices in their content, function, visual design, and platform. There has been little guidance on what state dashboards should look like or contain, leading to significant variation. OBJECTIVES The primary objective of our study was to catalog how information, system function, and user interface were deployed across the COVID-19 state dashboards. Our secondary objective was to group and characterize the dashboards based on the information we collected, using cluster analysis. METHODS For preliminary data collection, we developed a framework, first analyzing two dashboards as a group to reach agreement on coding. We subsequently double coded the remaining 48 dashboards using the framework and reviewed the coding to reach total consensus. RESULTS All state dashboards included maps and graphs, most frequently line charts, bar charts, and histograms. The most represented metrics were total deaths, total cases, new cases, laboratory tests, and hospitalizations. Decisions on how metrics were aggregated and stratified varied greatly across dashboards. Overall, the dashboards were highly interactive, with 96% offering at least some functionality such as tooltips, zooming, or exporting. For visual design and platform, we noted that the software was dominated by a few major organizations. Our cluster analysis yielded a six-cluster solution, and each cluster provided additional insight into how groups of states engaged in specific dashboard design practices. CONCLUSION Our study indicates that states engaged in dashboard practices that generally aligned with many of the goals set forth by the Centers for Disease Control and Prevention's Essential Public Health Services. 
We highlight areas where states fall short of these expectations and provide specific design recommendations to address these gaps.
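The abstract does not state which algorithm produced the six-cluster solution; as a purely hypothetical illustration, clustering binary-coded dashboard features (e.g., 0/1 indicators for whether each dashboard shows a given metric or chart type) might be sketched with a minimal k-means:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means on numeric feature vectors. Returns (centroids, labels).
    Illustrative sketch only: the study does not specify its clustering method."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data
    for _ in range(iters):
        # assign each point to its nearest centroid (squared Euclidean distance)
        labels = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # recompute each centroid as the mean of its assigned points
        new = []
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            new.append(tuple(sum(col) / len(members) for col in zip(*members))
                       if members else centroids[c])
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, labels
```

With 50 dashboards coded on such indicator features, one would choose k by a criterion such as silhouette width and then interpret each cluster's centroid as a design profile.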
Affiliation(s)
- Naleef Fareed
  - CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
  - Department of Biomedical Informatics, College of Medicine, Institute for Behavioral Medicine Research, The Ohio State University, Columbus, Ohio, United States
- Christine M. Swoboda
  - CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Sarah Chen
  - CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
- Evelyn Potter
  - Department of Biochemistry, Ohio University, Athens, Ohio, United States
- Danny T. Y. Wu
  - Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, Ohio, United States
- Cynthia J. Sieck
  - CATALYST—The Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, Ohio, United States
  - Department of Family and Community Medicine, College of Medicine, Institute for Behavioral Medicine Research, The Ohio State University, Columbus, Ohio, United States
5
Neth H, Gradwohl N, Streeb D, Keim DA, Gaissmaier W. Perspectives on the 2 × 2 Matrix: Solving Semantically Distinct Problems Based on a Shared Structure of Binary Contingencies. Front Psychol 2021;11:567817. PMID: 33633620; PMCID: PMC7901600; DOI: 10.3389/fpsyg.2020.567817.
Abstract
Cognition is both empowered and limited by representations. The matrix lens model explicates tasks that are based on frequency counts, conditional probabilities, and binary contingencies in a general fashion. Based on a structural analysis of such tasks, the model links several problems and semantic domains and provides a new perspective on representational accounts of cognition that recognizes representational isomorphs as opportunities, rather than as problems. The shared structural construct of a 2 × 2 matrix supports a set of generic tasks and semantic mappings that provide a unifying framework for understanding problems and defining scientific measures. Our model's key explanatory mechanism is the adoption of particular perspectives on a 2 × 2 matrix that categorizes the frequency counts of cases by some condition, treatment, risk, or outcome factor. By the selective steps of filtering, framing, and focusing on specific aspects, the measures used in various semantic domains negotiate distinct trade-offs between abstraction and specialization. As a consequence, the transparent communication of such measures must explicate the perspectives encapsulated in their derivation. To demonstrate the explanatory scope of our model, we use it to clarify theoretical debates on biases and facilitation effects in Bayesian reasoning and to integrate the scientific measures from various semantic domains within a unifying framework. A better understanding of problem structures, representational transparency, and the role of perspectives in the scientific process yields both theoretical insights and practical applications.
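As a concrete illustration of "perspectives" on a shared 2 × 2 structure, several familiar scientific measures can be derived from the same four frequency counts by filtering and framing them differently. The cell names below follow signal-detection convention, and the function is a sketch, not code from the paper:

```python
def matrix_measures(hi, mi, fa, cr):
    """Derive common measures from the four cells of a 2x2 contingency matrix:
    hi = hits (true positives), mi = misses (false negatives),
    fa = false alarms (false positives), cr = correct rejections (true negatives).
    Each measure adopts one perspective: a different filtering and framing of
    the same four frequency counts."""
    return {
        "sensitivity": hi / (hi + mi),  # row view: among actually positive cases
        "specificity": cr / (cr + fa),  # row view: among actually negative cases
        "ppv": hi / (hi + fa),          # column view: among predicted positives
        "accuracy": (hi + cr) / (hi + mi + fa + cr),  # whole-matrix view
    }
```

For instance, the counts (80, 20, 10, 90) yield a sensitivity of 0.80 but a positive predictive value of about 0.89: different perspectives on one matrix, which is exactly why transparent communication must explicate the perspective behind each measure.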
Affiliation(s)
- Hansjörg Neth
  - Social Psychology and Decision Sciences, Department of Psychology, University of Konstanz, Konstanz, Germany
- Nico Gradwohl
  - Social Psychology and Decision Sciences, Department of Psychology, University of Konstanz, Konstanz, Germany
- Dirk Streeb
  - Data Analysis and Visualization, Department of Computer Science, University of Konstanz, Konstanz, Germany
- Daniel A. Keim
  - Data Analysis and Visualization, Department of Computer Science, University of Konstanz, Konstanz, Germany
- Wolfgang Gaissmaier
  - Social Psychology and Decision Sciences, Department of Psychology, University of Konstanz, Konstanz, Germany
6
Karer B, Hagen H, Lehmann DJ. Insight Beyond Numbers: The Impact of Qualitative Factors on Visual Data Analysis. IEEE Trans Vis Comput Graph 2021;27:1011-1021. PMID: 33108287; DOI: 10.1109/TVCG.2020.3030376.
Abstract
As of today, data analysis focuses primarily on the findings to be made inside the data and concentrates less on how those findings relate to the domain of investigation. Contemporary visualization as a field of research shows a strong tendency to adopt this data-centrism. Despite their decisive influence on the analysis result, qualitative aspects of the analysis process, such as the structure, soundness, and complexity of the applied reasoning strategy, are rarely discussed explicitly. We argue that if the purpose of visualization is the provision of domain insight rather than the depiction of data analysis results, a holistic perspective requires a qualitative component to be added to the discussion of quantitative and human factors. To support this point, we demonstrate how considerations of qualitative factors in visual analysis can be applied to obtain explanations and possible solutions for a number of practical limitations inherent to the data-centric perspective on analysis. Based on this discussion of what we call qualitative visual analysis, we develop an inside-outside principle of nested levels of context that can serve as a conceptual basis for the development of visualization systems that optimally support the emergence of insight during analysis.