1. Cook R, Haydon HM, Thomas EE, Ward EC, Ross JA, Webb C, Harris M, Hartley C, Burns CL, Vivanti AP, Carswell P, Caffery LJ. Digital divide or digital exclusion? Do allied health professionals' assumptions drive use of telehealth? J Telemed Telecare 2023:1357633X231189846. PMID: 37543369. DOI: 10.1177/1357633X231189846.
Abstract
INTRODUCTION: Telehealth use within allied health services currently lacks structure and consistency, ultimately affecting who can, and cannot, access services. This study aimed to investigate the factors influencing allied health professionals' (AHPs') selection of consumers and appointments for telehealth.
METHODS: The study was conducted across 16 allied health departments from four Australian hospitals. Semi-structured focus groups were conducted with 58 AHPs. Analysis was underpinned by qualitative description methodology, with inductive coding guided by Braun and Clarke's thematic analysis approach.
RESULTS: Six themes influenced AHPs' evaluation of telehealth suitability and selection of consumers: (1) ease, efficiency and comfort of telehealth for clinicians; (2) clear benefits of telehealth for the consumer, although consumers were not always given the choice; (3) consumers' technology access and ability; (4) establishing and maintaining effective therapeutic relationships via telehealth; (5) delivering clinically appropriate and effective care via telehealth; and (6) external influences on telehealth service provision. A further theme of 'assumption versus reality' pervaded all six.
DISCUSSION: Clinicians remain the key decision makers for whether telehealth is offered within allied health services, and ease and efficiency of use is a major driver of AHPs' willingness to use it. Assumptions and preconceived frames of reference often underpin decisions not to offer telehealth and present major barriers to its adoption. Evidence-based decision-support frameworks that engage both the consumer and the clinician in determining when telehealth is used are required, and services need to actively pursue joint decision-making about service delivery preferences.
Affiliation(s)
- Renee Cook
- Centre for Online Health, The University of Queensland, Brisbane, Australia
- Centre for Health Services Research, The University of Queensland, Brisbane, Australia
- Centre for Functioning and Health Research (CFAHR), Metro South Health, Brisbane, Australia
- Speech Pathology Department, Princess Alexandra Hospital, Metro South Health, Brisbane, Australia
- Helen M Haydon
- Centre for Online Health, The University of Queensland, Brisbane, Australia
- Centre for Health Services Research, The University of Queensland, Brisbane, Australia
- Emma E Thomas
- Centre for Online Health, The University of Queensland, Brisbane, Australia
- Centre for Health Services Research, The University of Queensland, Brisbane, Australia
- Elizabeth C Ward
- Centre for Functioning and Health Research (CFAHR), Metro South Health, Brisbane, Australia
- School of Health & Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Julie-Anne Ross
- Allied Health, Princess Alexandra Hospital, Metro South Health, Brisbane, Australia
- Clare Webb
- Allied Health, Queen Elizabeth II Jubilee Hospital, Metro South Health, Brisbane, Australia
- Michael Harris
- Allied Health, Bayside Health Service, Metro South Health, Brisbane, Australia
- Carina Hartley
- Allied Health, Logan Hospital, Metro South Health, Brisbane, Australia
- Clare L Burns
- School of Health & Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Speech Pathology Department, Royal Brisbane & Women's Hospital, Metro North Health, Brisbane, Australia
- Angela P Vivanti
- Allied Health, Princess Alexandra Hospital, Metro South Health, Brisbane, Australia
- School of Human Movement and Nutrition Studies, The University of Queensland, Brisbane, Australia
- Phillip Carswell
- Consumer Advisor, Princess Alexandra Hospital, Metro South Health, Brisbane, Australia
- Liam J Caffery
- Centre for Online Health, The University of Queensland, Brisbane, Australia
- Centre for Health Services Research, The University of Queensland, Brisbane, Australia
2. Chao YS, Wu CJ, Lai YC, Hsu HT, Cheng YP, Wu HC, Huang SY, Chen WC. Why Mental Illness Diagnoses Are Wrong: A Pilot Study on the Perspectives of the Public. Front Psychiatry 2022; 13:860487. PMID: 35573385. PMCID: PMC9098926. DOI: 10.3389/fpsyt.2022.860487.
Abstract
BACKGROUND: Mental illness diagnostic criteria are based on assumptions. This pilot study aimed to assess the public's perspectives on mental illness diagnoses and these assumptions.
METHODS: An anonymous 30-question survey was made available online in 2021. Participants were recruited via social media, and no personal information was collected. Ten questions focused on participants' perceptions of mental illness diagnoses, and 20 related to the assumptions underlying those diagnoses; participants' views on the assumptions held by professionals were assessed.
RESULTS: Among 14 survey participants, 4 (28.57%) correctly answered the relationships of 6 symptom pairs, and 2 (14.29%) could not correctly conduct the calculations involved in mood disorder diagnoses. Eleven (78.57%) correctly indicated that 2 or more sets of criteria are available for a single mental illness diagnosis. Only 1 (7.14%) correctly answered that the associations between symptoms and diagnoses are supported by including the symptoms in the diagnostic criteria. Nine (64.29%) correctly answered that diagnosis variances are not fully explained by their symptoms. Participants' confidence in the major depressive disorder diagnosis and their willingness to take medications for this diagnosis were the same (mean = 5.50, standard deviation [SD] = 2.31), whereas their confidence in the symptom-based diagnosis of non-solid brain tumor was significantly lower (mean = 1.62, SD = 2.33, p < 0.001).
CONCLUSION: From the perspectives of the public, mental illness diagnoses are wrong in that our participants did not agree with all the assumptions professionals make about them, and only a minority obtained correct answers to the calculations involved. In the literature, neither patients nor the public have been engaged in formulating the diagnostic criteria of mental illnesses.
Affiliation(s)
- Chao-Jung Wu
- Département d'Informatique, Université du Québec à Montréal, Montréal, QC, Canada
- Yi-Chun Lai
- National Yang Ming Chiao Tung University Hospital, Yilan, Taiwan
- Hsing-Chien Wu
- National Taiwan University Hospital, New Taipei City, Taiwan
- Shih-Yu Huang
- Department of Anesthesiology, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan; Department of Anesthesiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Wei-Chih Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Emergency and Critical Care Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
3. Luyten J, van Hoek AJ. Integrating Alternative Social Value Judgments Into Cost-Effectiveness Analysis of Vaccines: An Application to Varicella-Zoster Virus Vaccination. Value Health 2021; 24:41-49. PMID: 33431152. DOI: 10.1016/j.jval.2020.07.011.
Abstract
OBJECTIVES: Cost-effectiveness analyses (CEAs) are based on the value judgment that health outcomes (e.g., quantified in quality-adjusted life-years, QALYs) are all equally valuable irrespective of their context. Whereas most published CEAs perform extensive sensitivity analysis on various parameters and assumptions, the influence of this QALY-equivalence assumption on cost-effectiveness results is only rarely investigated. We illustrate how integrating alternative social value judgments into CEA can serve as a useful form of sensitivity analysis.
METHODS: Because varicella-zoster virus (VZV) vaccination affects two distinct diseases (varicella and herpes zoster) and likely redistributes infections across different age groups, the program has an important equity dimension. Using a cost-effectiveness model, we disentangled the shares of direct protection and herd immunity within the total projected QALY impact of a 50-year childhood VZV program in the UK, and revalued QALYs according to the UK population's preferences for QALYs in the vaccine context.
RESULTS: Revaluing different types of QALYs for different age groups in line with public preferences leads to a 98% change in the projected net impact of the program. The QALYs gained among children through direct varicella protection become more important, whereas the QALYs lost indirectly through zoster in adults diminish in value. Weighting of vaccine-related side effects made a large difference.
CONCLUSIONS: A sensitivity analysis that integrates alternative social value judgments about the value of health outcomes into CEA of vaccines is relatively straightforward and provides important additional information for decision makers interpreting cost-effectiveness results.
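The reweighting idea in this abstract can be sketched in a few lines: sum the QALY components of a program once with equal weights (standard CEA) and once with preference-based weights. All component values and weights below are hypothetical placeholders, not figures from the study.

```python
# Illustrative sketch of equity-weighted sensitivity analysis in CEA.
# All QALY components and preference weights are hypothetical.

def net_qalys(components, weights=None):
    """Sum QALY components, optionally revalued by preference weights."""
    weights = weights or {k: 1.0 for k in components}
    return sum(qaly * weights[k] for k, qaly in components.items())

# Hypothetical QALY impact of a childhood vaccination program:
# direct gains in children, indirect losses from shifting disease
# to older age groups, and losses from side effects.
components = {
    "child_direct_gain": 900.0,
    "adult_indirect_loss": -800.0,
    "side_effect_loss": -50.0,
}

# Standard CEA: every QALY counts equally.
baseline = net_qalys(components)

# Hypothetical public-preference weights: child gains valued more,
# indirect adult losses discounted, side effects weighted up.
weights = {
    "child_direct_gain": 1.2,
    "adult_indirect_loss": 0.5,
    "side_effect_loss": 1.5,
}
reweighted = net_qalys(components, weights)

print(baseline)    # 50.0
print(reweighted)  # 605.0
```

Even with identical projected health effects, the program's net impact changes sharply under the alternative valuation, which is the kind of sensitivity the authors argue decision makers should see.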
Affiliation(s)
- Jeroen Luyten
- Leuven Institute for Healthcare Policy, KULeuven, Kapucijnenvoer 35, 3000 Leuven, Belgium; Personal Social Services Research Unit, Department of Health Policy, London School of Economics, Houghton Street, London, England, United Kingdom
- Albert Jan van Hoek
- Department of Infectious Disease Epidemiology, Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, Keppel Street, London, England, United Kingdom; Centre for Infectious Diseases, National Institute for Public Health and the Environment, Antonie van Leeuwenhoeklaan, Bilthoven, The Netherlands
4. Kahale LA, Khamis AM, Diab B, Chang Y, Lopes LC, Agarwal A, Li L, Mustafa RA, Koujanian S, Waziry R, Busse JW, Dakik A, Hooft L, Guyatt GH, Scholten RJPM, Akl EA. Meta-Analyses Proved Inconsistent in How Missing Data Were Handled Across Their Included Primary Trials: A Methodological Survey. Clin Epidemiol 2020; 12:527-535. PMID: 32547244. PMCID: PMC7266325. DOI: 10.2147/clep.s242080.
Abstract
Background: How systematic review authors address missing data among eligible primary studies remains uncertain.
Objective: To assess whether systematic review authors are consistent in how they handle missing data, both across trials included in the same meta-analysis and with their reported methods.
Methods: We first identified 100 eligible systematic reviews that included a statistically significant meta-analysis of a patient-important dichotomous efficacy outcome, then successfully retrieved 638 of the 653 trials included in these meta-analyses. From each trial report, we extracted the statistical data used in the analysis of the outcome of interest and compared them with the data used in the meta-analysis. We used these comparisons to classify the "analytical method actually used" for handling missing data for each included trial; assessed whether the reviews explicitly reported their method of handling missing data; calculated the proportion of reviews that were consistent in the method actually used across trials in the same meta-analysis; and, among reviews that were both consistent across trials and explicit about their method, assessed whether the method actually used matched the reported method.
Results: We were unable to determine the analytical method actually used for handling missing outcome data in 397 trials. Among the remaining 241, systematic review authors most commonly conducted complete case analysis (n=128, 53%) or assumed that none of the participants with missing data had the event of interest (n=58, 24%). Only 8 of the 100 systematic reviews were consistent in their approach across included trials, and none of these reported their methods for handling missing data. Among the seven reviews that did explicitly report a method, only one was consistent across included trials (using complete case analysis), and its actual approach was inconsistent with its reported method (assuming all participants with missing data had the event).
Conclusion: Most systematic review authors were inconsistent in reporting and handling missing outcome data across eligible primary trials, and most did not explicitly report their methods. Systematic review authors should clearly identify missing outcome data among their eligible trials, specify an approach for handling missing data in their analyses, and apply it consistently across all primary trials.
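The two approaches the survey found most often, plus the worst-case assumption it mentions, can be contrasted with a toy calculation of a trial arm's event risk. The counts below are invented for illustration and do not come from any trial in the survey.

```python
# Sketch of how common missing-data assumptions change a trial arm's
# event risk. All counts are hypothetical.

def complete_case(events, analysed, missing):
    """Ignore participants with missing outcomes entirely."""
    return events / analysed

def none_had_event(events, analysed, missing):
    """Assume no participant with missing data had the event."""
    return events / (analysed + missing)

def all_had_event(events, analysed, missing):
    """Worst case: assume every missing participant had the event."""
    return (events + missing) / (analysed + missing)

# Hypothetical arm: 200 randomised, 20 lost to follow-up, 45 events
# observed among the 180 analysed.
events, analysed, missing = 45, 180, 20

print(complete_case(events, analysed, missing))   # 0.25
print(none_had_event(events, analysed, missing))  # 0.225
print(all_had_event(events, analysed, missing))   # 0.325
```

The three assumptions give three different risks from the same data, which is why mixing approaches across trials in one meta-analysis, as the survey observed, quietly shifts the pooled estimate.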
Affiliation(s)
- Lara A Kahale
- Clinical Research Institute, American University of Beirut, Beirut, Lebanon
- Assem M Khamis
- Wolfson Palliative Care Research Centre, Hull York Medical School, University of Hull, Hull, UK
- Batoul Diab
- Clinical Research Institute, American University of Beirut, Beirut, Lebanon
- Yaping Chang
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada
- Luciane Cruz Lopes
- Pharmaceutical Sciences Post Graduate Course, University of Sorocaba, UNISO, Sorocaba, Sao Paulo, Brazil
- Arnav Agarwal
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Ling Li
- Chinese Evidence-Based Medicine Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Reem A Mustafa
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada; Departments of Medicine and Biomedical & Health Informatics, University of Missouri-Kansas City, Kansas City, MO, USA
- Serge Koujanian
- Department of Evaluative Clinical Sciences, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Reem Waziry
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Jason W Busse
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada; Department of Anesthesia, McMaster University, Hamilton, Canada; The Michael G. DeGroote National Pain Centre, McMaster University, Hamilton, Canada; Chronic Pain Centre of Excellence for Canadian Veterans, Hamilton, Canada
- Abeer Dakik
- Clinical Research Institute, American University of Beirut, Beirut, Lebanon
- Lotty Hooft
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Gordon H Guyatt
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada; Department of Medicine, McMaster University, Hamilton, Canada
- Rob J P M Scholten
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Elie A Akl
- Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada
5. Ooi QX, Wright DFB, Isbister GK, Duffull SB. Evaluation of Assumptions Underpinning Pharmacometric Models. AAPS J 2019; 21:97. PMID: 31385119. DOI: 10.1208/s12248-019-0366-2.
Abstract
Assumptions inherent to pharmacometric model development and use are not routinely acknowledged, described, or evaluated. The aim of this work is to present a framework for systematic evaluation of assumptions. To aid identification of assumptions, we categorise assumptions into two types: implicit and explicit assumptions. Implicit assumptions are inherent in a method or model and underpin its derivation and use. Explicit assumptions arise from heuristic principles and are typically defined by the user to enable the application of a method or model. A flowchart was developed for systematic evaluation of assumptions. For each assumption, the impact of assumption violation ('significant', 'insignificant', 'unknown') and the probability of assumption violation ('likely', 'unlikely', 'unknown') will be evaluated based on prior knowledge or the result of an additional bespoke study to arrive at a decision ('go', 'no-go') for both model building and model use. A table of assumptions with standardised headings has been proposed to facilitate the documentation of assumptions and evaluation of results. The utility of the proposed framework was illustrated using four assumptions underpinning a top-down model describing the warfarin-coagulation proteins' relationship. The next step of this work is to apply the framework to a series of other settings to fully assess its practicality and its value in identifying and making inferences from assumptions.
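The decision step in the framework above pairs an assumption's impact of violation with its probability of violation to reach a go/no-go outcome. The mapping below is one illustrative reading of that step, not the authors' published decision table, and the example assumptions are invented.

```python
# Minimal sketch of the assumption-evaluation decision step: each
# assumption's impact and probability of violation map to a decision.
# The mapping and the example assumptions are illustrative only.

def evaluate(impact, probability):
    """Return a decision for one assumption.

    impact: 'significant', 'insignificant' or 'unknown'
    probability: 'likely', 'unlikely' or 'unknown'
    """
    # An unknown on either axis calls for a bespoke study before a
    # go/no-go decision can be made.
    if impact == "unknown" or probability == "unknown":
        return "bespoke study needed"
    # Only an assumption that is both likely to be violated and
    # consequential if violated blocks model building or use.
    if impact == "significant" and probability == "likely":
        return "no-go"
    return "go"

# Hypothetical assumptions for a pharmacokinetic model.
assumptions = {
    "linear clearance over the dose range": ("significant", "unlikely"),
    "no circadian effect on response": ("insignificant", "likely"),
    "identical turnover across proteins": ("significant", "unknown"),
}

for name, (impact, prob) in assumptions.items():
    print(f"{name}: {evaluate(impact, prob)}")
```

Tabulating assumptions this way also yields the standardised documentation the authors propose: one row per assumption, with its classification and resulting decision.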
Affiliation(s)
- Qing-Xi Ooi
- School of Pharmacy, University of Otago, 63 Hanover Street, Dunedin, 9016, New Zealand
- Daniel F B Wright
- School of Pharmacy, University of Otago, 63 Hanover Street, Dunedin, 9016, New Zealand
- Geoffrey K Isbister
- School of Medicine and Public Health, University of Newcastle, Newcastle, NSW, Australia
- Stephen B Duffull
- School of Pharmacy, University of Otago, 63 Hanover Street, Dunedin, 9016, New Zealand
6.
Abstract
Phylogenetic comparative methods are becoming increasingly popular for investigating evolutionary patterns and processes. However, these methods are not infallible: they suffer from biases and make assumptions like all other statistical methods. Unfortunately, although these limitations are generally well known in the phylogenetic comparative methods community, they are often inadequately assessed in empirical studies, leading to misinterpreted results and poor model fits. Here, we explore reasons for the communication gap dividing those developing new methods and those using them.
We suggest that some important pieces of information are missing from the literature and that others are difficult to extract from long, technical papers. We also highlight problems with users jumping straight into software implementations of methods (e.g. in R) that may lack documentation on the biases and assumptions mentioned in the original papers.
To help solve these problems, we make a number of suggestions, including: providing blog posts or videos to explain new methods in less technical terms; encouraging reproducibility and code sharing; making wiki-style pages summarising the literature on popular methods; more careful consideration and testing of whether a method is appropriate for a given question or data set; increased collaboration; and a shift from publishing purely novel methods to publishing improvements to existing methods and ways of detecting biases or testing model fit. Many of these points apply across methods in ecology and evolution, not just phylogenetic comparative methods.
Affiliation(s)
- Natalie Cooper
- School of Natural Sciences, Trinity College Dublin, Dublin 2, Ireland; Department of Life Sciences, Natural History Museum, Cromwell Road, London SW7 5BD, UK
- Gavin H Thomas
- Department of Animal and Plant Sciences, University of Sheffield, Sheffield S10 2TN, UK
- Richard G FitzJohn
- Department of Biological Sciences, Macquarie University, Sydney, NSW 2109, Australia
7. Casson RJ, Farmer LDM. Understanding and checking the assumptions of linear regression: a primer for medical researchers. Clin Exp Ophthalmol 2014; 42:590-6. PMID: 24801277. DOI: 10.1111/ceo.12358.
Abstract
Linear regression (LR) is a powerful statistical model when used correctly. Because the model is an approximation of the long-term sequence of any event, it requires assumptions to be made about the data it represents in order to remain appropriate. However, these assumptions are often misunderstood. We present the basic assumptions used in the LR model and offer a simple methodology for checking if they are satisfied prior to its use. In doing so, we aim to increase the effectiveness and appropriateness of LR in clinical research.
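The kind of assumption checking this primer describes can be sketched on toy data: fit an ordinary least-squares line, then examine the residuals for normality, constant variance and independence. The specific tests and thresholds below are common conventions, not necessarily the ones the primer recommends.

```python
# Sketch of checking core linear-regression assumptions on toy data.
# Test choices and thresholds are illustrative conventions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Fit y = slope * x + intercept by ordinary least squares.
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)

# 1. Normality of residuals: Shapiro-Wilk test
#    (large p-value -> no evidence against normality).
_, p_normal = stats.shapiro(residuals)

# 2. Homoscedasticity: correlation between |residuals| and x should be
#    near zero if the error variance is constant.
r_homo, _ = stats.pearsonr(np.abs(residuals), x)

# 3. Independence: lag-1 autocorrelation of residuals should be small.
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]

print(f"slope: {slope:.3f}, intercept: {intercept:.3f}")
print(f"normality p-value: {p_normal:.3f}")
print(f"|residual|-vs-x correlation: {r_homo:.3f}")
print(f"lag-1 autocorrelation: {lag1:.3f}")
```

In practice these numerical checks complement, rather than replace, the residual plots the primer emphasises; a residuals-versus-fitted plot often reveals non-linearity or funnelling that a single statistic misses.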
Affiliation(s)
- Robert J Casson
- South Australian Institute of Ophthalmology, University of Adelaide, Adelaide, South Australia, Australia; Discipline of Ophthalmology & Visual Sciences, University of Adelaide, Adelaide, South Australia, Australia; Sight for All, Royal Adelaide Hospital, Adelaide, South Australia, Australia