1. Agley J, Mumaw C, Johnson B. Rationale and Study Checklist for Ethical Rejection of Participants on Crowdsourcing Research Platforms. Ethics Hum Res 2024; 46:38-46. PMID: 38944883; DOI: 10.1002/eahr.500217.
Abstract
Online participant recruitment ("crowdsourcing") platforms are increasingly being used for research studies. While such platforms can rapidly provide access to large samples, there are concomitant concerns around data quality. Researchers have studied and demonstrated means to reduce the prevalence of low-quality data from crowdsourcing platforms, but approaches to doing so often involve rejecting work and/or denying payment to participants, which can pose ethical dilemmas. We write this essay as an associate professor and two institutional review board (IRB) directors to provide a perspective on the competing interests of participants/workers and researchers and to propose a checklist of steps that we believe may support workers' agency on the platform and lessen instances of unfair consequences to them while enabling researchers to definitively reject lower-quality work that might otherwise reduce the likelihood of their studies producing true results. We encourage further, explicit discussion of these issues among academics and among IRBs.
Affiliation(s)
- Jon Agley
- Associate professor in the Department of Applied Health Science and deputy director of research for Prevention Insights, School of Public Health, Indiana University Bloomington
- Casey Mumaw
- Assistant director for the Human Research Protection Program at Indiana University
- Bethany Johnson
- University director of the Human Research Protection Program at Indiana University
2. Sohi I, Shield KD, Rehm J, Monteiro M. Digital interventions for reducing alcohol use in general populations: An updated systematic review and meta-analysis. Alcohol Clin Exp Res 2023; 47:1813-1832. PMID: 37864535; DOI: 10.1111/acer.15175.
Abstract
This article updates a 2017 review on the effectiveness of digital interventions for reducing alcohol use in the general population. An updated systematic search of the MEDLINE database was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses criteria to identify randomized controlled trials (RCTs) published from January 2017 to June 2022 that evaluated the effectiveness of digital interventions, compared with no intervention, minimal interventions, or face-to-face interventions, aimed at reducing alcohol use in the general population, and that also reported changes in alcohol use (quantity, frequency, quantity per drinking day, heavy episodic drinking (HED), or Alcohol Use Disorders Identification Test (AUDIT) scores). A secondary analysis examined data from RCTs conducted in students. The review was not preregistered. The search produced 2224 articles. A total of 80 studies were included in the review, 35 of which were published after the last systematic review. A total of 66, 20, 18, 26, and 9 studies assessed the impact of digital interventions on alcohol quantity, frequency, quantity per drinking day, HED, and AUDIT scores, respectively. Individuals randomized to the digital interventions drank 4.12 (95% confidence interval (CI): 2.88, 5.36) fewer grams of alcohol per day, had 0.17 (95% CI: 0.06, 0.29) fewer drinking days per week, drank approximately 3.89 (95% CI: 0.40, 7.38) fewer grams of alcohol per drinking day, had 1.11 (95% CI: 0.32, 1.91) fewer HED occasions per month, and had an AUDIT score 3.04 points lower (95% CI: 2.23, 3.85) than individuals randomized to the control condition. Significant reductions in alcohol quantity, frequency, and HED, but not quantity per drinking day, were observed among students. Digital interventions show potential for reducing alcohol use in general populations and could be used widely at the population level to reduce alcohol-attributable harms.
Affiliation(s)
- Ivneet Sohi
- Centre for Addiction and Mental Health, Institute for Mental Health Policy Research, Toronto, Ontario, Canada
- Kevin D Shield
- Centre for Addiction and Mental Health, Institute for Mental Health Policy Research, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Centre for Addiction and Mental Health, Campbell Family Mental Health Research Institute, Toronto, Ontario, Canada
- Jürgen Rehm
- Centre for Addiction and Mental Health, Institute for Mental Health Policy Research, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Centre for Addiction and Mental Health, Campbell Family Mental Health Research Institute, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Center for Clinical Epidemiology and Longitudinal Studies, Institute of Clinical Psychology and Psychotherapy, Technische Universität Dresden, Dresden, Germany
- Department of International Health Projects, Institute for Leadership and Health Management, I.M. Sechenov First Moscow State Medical University, Moscow, Russia
3. Kolc KL, Tan YXK, Lo AZY, Shvetcov A, Mitchell PB, Perkes IE. Measuring psychiatric symptoms online: A systematic review of the use of inventories on Amazon Mechanical Turk (mTurk). J Psychiatr Res 2023; 163:118-126. PMID: 37209617; DOI: 10.1016/j.jpsychires.2023.05.027.
Abstract
Symptom measurement in psychiatric research increasingly uses digitized self-report inventories and is turning to crowdsourcing platforms, such as Amazon Mechanical Turk (mTurk), for recruitment. The impact of digitizing pencil-and-paper inventories on their psychometric properties is underexplored in mental health research. Against this background, numerous studies report high prevalence estimates of psychiatric symptoms in mTurk samples. Here we develop a framework to evaluate the online implementation of psychiatric symptom inventories across two domains: adherence to (i) validated scoring and (ii) standardized administration. We apply this new framework to the online use of the Patient Health Questionnaire-9 (PHQ-9), Generalized Anxiety Disorder-7 (GAD-7), and Alcohol Use Disorder Identification Test (AUDIT). Our systematic review of the literature identified 36 implementations of these three inventories on mTurk across 27 publications. We also evaluated methodological approaches to enhance data quality, e.g., the use of bot detection and attention-check items. Of the 36 implementations, 23 reported the applied diagnostic scoring criteria and only 18 reported the specified symptom timeframe. None of the 36 implementations reported adaptations made in their digitization of the inventories. While recent reports attribute higher rates of mood, anxiety, and alcohol use disorders on mTurk to data quality, our findings indicate that this inflation may also relate to the assessment methods. We provide recommendations to enhance both data quality and fidelity to validated administration and scoring methods.
Affiliation(s)
- Kristy L Kolc
- School of Psychology, Faculty of Science, University of New South Wales, Sydney, Australia
- Discipline of Psychiatry and Mental Health, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Sydney, Australia
- Yue Xuan Karen Tan
- Discipline of Psychiatry and Mental Health, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Sydney, Australia
- Alys Z Y Lo
- Discipline of Psychiatry and Mental Health, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Sydney, Australia
- Artur Shvetcov
- Discipline of Psychiatry and Mental Health, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Sydney, Australia
- Black Dog Institute, Sydney, Australia
- Philip B Mitchell
- Discipline of Psychiatry and Mental Health, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Sydney, Australia
- Iain E Perkes
- School of Psychology, Faculty of Science, University of New South Wales, Sydney, Australia
- Discipline of Psychiatry and Mental Health, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Sydney, Australia
- Discipline of Paediatrics and Child Health, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Sydney, Australia
- Department of Psychological Medicine, Sydney Children's Hospital Network, Sydney, Australia
4. Strickland JC, Bolin BL, Marks KR. (Non-) impact of task experience on behavioral economic decision-making. Exp Clin Psychopharmacol 2022; 30:338-350. PMID: 33617281; PMCID: PMC8610096; DOI: 10.1037/pha0000438.
Abstract
Behavioral economic research has been widely conducted via crowdsourcing resources to evaluate novel task designs or pilot interventions. One underrecognized and yet-to-be-tested concern is the impact of non-naïvety (i.e., prior task exposure) on behavioral economic task performance. We evaluated the influence of non-naïvety on task performance in two popular areas of behavioral economic research: behavioral economic demand and delay discounting. Participants (N = 485) recruited using Amazon Mechanical Turk (mTurk) completed alcohol and soda purchase tasks and delay discounting tasks for monetary and alcohol outcomes. Equivalence of responding and effect sizes with clinical variables were compared based on prior task experience. Over one quarter of participants reported demand task experience (26.9%) and nearly half endorsed delay discounting task experience (48.6%). Statistically equivalent responding was observed for alcohol purchase task data, with less-than-small effect size differences based on task experience (d = 0.01-0.13). Similar results were observed for a soda purchase task, thereby supporting generalization to a non-alcohol commodity. Measures of convergent and discriminant validity for behavioral economic demand indicated medium-to-large and stimulus-specific effect sizes with little variation based on prior task exposure. Delay discounting for money and alcohol showed some sensitivity to prior task experience (i.e., less steep discounting for non-naïve participants); however, these effects were attenuated after accounting for group differences in alcohol use. These findings support the fidelity of behavioral economic task outcomes and emphasize that participant non-naïvety in crowdsourcing settings may minimally impact performance on behavioral economic assays commonly used in behavioral and addiction science.
5. Quality control questions on Amazon's Mechanical Turk (MTurk): A randomized trial of impact on the USAUDIT, PHQ-9, and GAD-7. Behav Res Methods 2021; 54:885-897. PMID: 34357539; PMCID: PMC8344397; DOI: 10.3758/s13428-021-01665-8.
Abstract
Crowdsourced psychological and other biobehavioral research using platforms like Amazon's Mechanical Turk (MTurk) is increasingly common, but it has proliferated more rapidly than studies establishing data-quality best practices. Thus, this study investigated whether outcome scores for three common screening tools would differ significantly among MTurk workers who were subject to different sets of quality control checks. We conducted a single-stage, randomized controlled trial with equal allocation to each of four study arms: Arm 1 (Control Arm), Arm 2 (Bot/VPN Check), Arm 3 (Truthfulness/Attention Check), and Arm 4 (Stringent Arm - All Checks). Data collection was completed in Qualtrics, to which participants were referred from MTurk. Subjects (n = 1100) were recruited on November 20-21, 2020. Eligible workers were required to claim U.S. residency, have a successful task completion rate > 95%, have completed a minimum of 100 tasks, and have completed a maximum of 10,000 tasks. Participants completed the US-Alcohol Use Disorders Identification Test (USAUDIT), the Patient Health Questionnaire (PHQ-9), and a screener for Generalized Anxiety Disorder (GAD-7). We found that differing quality control approaches significantly, meaningfully, and directionally affected outcome scores on each of the screening tools. Most notably, workers in Arm 1 (Control) reported higher scores than those in Arms 3 and 4 for all tools, and a higher score than workers in Arm 2 for the PHQ-9. These data suggest that the use, or lack thereof, of quality control questions in crowdsourced research may substantively affect findings, as might the types of quality control items used.
6. Actual versus perceived infection rates of COVID-19: Impact on distress, behavior and disability. J Psychiatr Res 2021; 137:525-533. PMID: 33831818; PMCID: PMC8012838; DOI: 10.1016/j.jpsychires.2021.03.061.
Abstract
OBJECTIVE Accurate threat appraisal is central to survival. In the case of the coronavirus pandemic, accurate threat appraisal is difficult due to incomplete medical knowledge as well as complex social factors (e.g., mixed public health messages). The purpose of this study was to evaluate the degree to which individuals accurately perceive COVID-19 infection rates and to explore the role of COVID-19 threat perception in emotional and behavioral responses, both cross-sectionally and prospectively. METHODS A community sample (N = 249) was assessed using online crowdsourcing and followed for one month. COVID-19 threat appraisal was compared with actual COVID-19 infection rates and deaths at the time of data collection in each participant's county and state. It was predicted that actual and perceived COVID-19 infection rates would be only modestly associated. Relative to actual infection rates, perceived infection rates were hypothesized to be a better predictor of COVID-related behaviors, distress, and impairment. RESULTS Findings indicated that, relative to actual infection, perceived infection was a better predictor of COVID-related outcomes cross-sectionally and longitudinally. Interestingly, actual infection rates were negatively related to behaviors cross-sectionally (e.g., less stockpiling). Prospectively, these variables interacted to predict avoidance behaviors over time, such that the relationship between perceived infection and avoidance was stronger as actual infection increased. CONCLUSIONS These data suggest that perceived COVID-19 infection is significantly associated with COVID-related behaviors, distress, and impairment, whereas actual infection rates have a less important and perhaps even paradoxical influence on behavioral responses to the pandemic.
7. Niforatos JD, Chaitoff A, Zheutlin AR, Feinstein MM, Raja AS. Barriers to emergency department usage during the COVID-19 pandemic. J Am Coll Emerg Physicians Open 2020; 1:1261-1268. PMID: 33392530; PMCID: PMC7771795; DOI: 10.1002/emp2.12316.
Abstract
OBJECTIVE The objective of this study was to determine the public's likelihood of being willing to use an emergency department (ED) for urgent/emergent illness during the coronavirus disease 2019 (COVID-19) pandemic. METHODS An institutional review board-approved, cross-sectional survey of a non-probability sample from Amazon Mechanical Turk was administered May 24-25, 2020. Change in self-reported willingness to use an ED before and during the pandemic (primary outcome) was assessed via McNemar's test; COVID-19 knowledge and perceptions were secondary outcomes. RESULTS There were 855 survey participants (466 [54.5%] male; 699 [81.8%] White; median age 39). Proportion reporting likelihood to use the ED pre-pandemic (71% [604/855]) decreased significantly during the pandemic (49% [417/855]; P < 0.001); those unlikely to visit the ED increased significantly during the pandemic (41% [347/855] vs 22% [417/855], P < 0.001). Participants were unlikely to use the ED during the pandemic if they were unlikely to use it pre-pandemic (adjusted odds ratio, 4.55; 95% confidence interval, 3.09-6.7) or correctly answered more COVID-19 knowledge questions (adjusted odds ratio, 1.37; 95% confidence interval, 1.17-1.60). Furthermore, 23.4% (n = 200) of respondents believed the pandemic was not a serious threat to society. Respondents with higher COVID-19 knowledge scores were more likely to view the pandemic as serious (odds ratio, 1.57; 95% confidence interval, 1.36-1.82). CONCLUSIONS This survey study investigated the public's willingness to use the ED during the COVID-19 pandemic. Only 49% of survey respondents were willing to visit the ED during a pandemic if they felt ill compared with 71% before the pandemic.
Affiliation(s)
- Joshua D. Niforatos
- Department of Emergency Medicine, The Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Alexander Chaitoff
- Department of Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Alexander R. Zheutlin
- Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, Utah, USA
- Max M Feinstein
- Department of Anesthesiology, Perioperative and Pain Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Ali S. Raja
- Department of Emergency Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
8. Godinho A, Cunningham JA, Schell C. The particular case of conducting addiction intervention research on Mechanical Turk. Addiction 2020; 115:1971-1972. PMID: 32427392; DOI: 10.1111/add.15097.
Affiliation(s)
- Alexandra Godinho
- Centre for Addiction and Mental Health, Institute of Mental Health and Policy Research, Toronto, Canada
- John A Cunningham
- Centre for Addiction and Mental Health, Institute of Mental Health and Policy Research, Toronto, Canada
- Department of Psychiatry, University of Toronto, Toronto, Canada
- Australian National University, Canberra, Australia
- Christina Schell
- Centre for Addiction and Mental Health, Institute of Mental Health and Policy Research, Toronto, Canada
9. Schell C, Godinho A, Cunningham JA. To thine own self, be true: Examining change in self-reported alcohol measures over time as related to socially desirable responding bias among people with unhealthy alcohol use. Subst Abus 2020; 42:87-93. DOI: 10.1080/08897077.2019.1697998.
Affiliation(s)
- Christina Schell
- Institute of Mental Health and Policy Research, Centre for Addiction and Mental Health, Toronto, Canada
- Alexandra Godinho
- Institute of Mental Health and Policy Research, Centre for Addiction and Mental Health, Toronto, Canada
- John A. Cunningham
- Institute of Mental Health and Policy Research, Centre for Addiction and Mental Health, Toronto, Canada
- Department of Psychiatry, University of Toronto, Toronto, Canada
- ANU College of Health and Medicine, Australian National University, Canberra, Australia
10. Strickland JC, Victor GA. Leveraging crowdsourcing methods to collect qualitative data in addiction science: Narratives of non-medical prescription opioid, heroin, and fentanyl use. Int J Drug Policy 2020; 75:102587. PMID: 31751813; PMCID: PMC6957729; DOI: 10.1016/j.drugpo.2019.10.013.
Abstract
BACKGROUND Online crowdsourcing methods have proved useful for studies of diverse designs in the behavioral and addiction sciences. The remote, online setting of crowdsourcing research may provide easier access to unique participant populations and improved comfort for these participants in sharing sensitive health or behavioral information. To date, few studies have evaluated the use of qualitative research methods on crowdsourcing platforms, and even fewer have evaluated the quality of the data gathered. The purpose of the present analysis was to document the feasibility and validity of using crowdsourcing techniques to collect qualitative data among people who use drugs. METHODS Participants (N = 60) with a history of non-medical prescription opioid use with transition to heroin or fentanyl use were recruited using Amazon Mechanical Turk (mTurk). A battery of qualitative questions was included indexing beliefs and behaviors surrounding opioid use, transition pathways to heroin and/or fentanyl use, and drug-related contacts with structural institutions (e.g., health care, criminal justice). RESULTS Qualitative data recruitment was feasible, as evidenced by the rapid sampling of a relatively large number of participants from diverse geographic regions. Computerized text analysis indicated high ratings of authenticity for the provided narratives. These authenticity percentiles were higher than the averages for both general normative writing samples and samples collected in experimental settings. CONCLUSIONS These findings support the feasibility and quality of qualitative data collected in online settings broadly and crowdsourced settings specifically. Future work among people who use drugs may leverage crowdsourcing methods and the access to hard-to-sample populations to complement existing studies in the human laboratory and clinic as well as those using other digital technology methods.
Affiliation(s)
- Justin C Strickland
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, 5510 Nathan Shock Drive, Baltimore, MD 21224-6823, United States
- Grant A Victor
- Center for Behavioral Health and Justice, Wayne State University School of Social Work, 5201 Cass Ave, Detroit, Michigan 48202, United States
11. Niforatos JD, Chaitoff A, Mercer MB, Yu PC, Rose SL. Association Between Public Trust and Provider Specialty Among Physicians With Financial Conflicts of Interest. Mayo Clin Proc 2019; 94:2467-2475. PMID: 31806100; DOI: 10.1016/j.mayocp.2019.07.009.
Abstract
OBJECTIVES To characterize public perception of physicians' conflicts of interest (COIs) across medical and surgical specialties. PATIENTS AND METHODS A cross-sectional, 6-arm, randomized survey of a nonprobability sample from Amazon's Mechanical Turk was conducted on December 11 to 16, 2018. Survey respondents were randomly assigned to vignettes that varied the specialty of the physician with a COI. The primary outcome was the mean difference in Mayer Trust scores, and the secondary outcome was the proportion of respondents who would discontinue care. RESULTS Of 1920 respondents, 1729 completed the experiment (90.1% completion rate). Respondents were predominantly male (52.5%; n=907), white (71.4%; n=1234), and between the ages of 25 and 44 years (70.9%; n=1227). Mean ± SD Mayer Trust across the 6 specialties was 3.7 ± 0.60, with the only between-specialty differences observed for psychiatry compared with the other specialties (F=5.4; P<.001). The median dollar amount that would affect respondents' trust in a physician was $5000 (interquartile range, $100-$100,000). A total of 75.1% (n=1298) of respondents desired COI information, with 41.6% (n=720) discontinuing care. Age older than 34 years (adjusted odds ratio [aOR], 0.7; 95% CI, 0.49-0.99; P=.047), nonwhite race (aOR, 1.3; 95% CI, 1.02-1.6; P=.03), educational attainment of 4 or more years of college (aOR, 1.31; 95% CI, 1.05-1.6; P=.016), and physician specialty of psychiatry (aOR, 1.5; 95% CI, 1.03-2.2; P=.034) were predictors of discontinuing care. CONCLUSION Public COI disclosure is a common method for managing financial conflicts. Although survey respondents were more likely to discontinue care with a physician with a COI, they will act on this knowledge of the COI differently depending on the specialty of the physician. The finding that psychiatry is an outlier may be a chance finding that warrants further confirmation. Continued efforts to ensure best practices for disclosure are required.
Affiliation(s)
- Joshua D Niforatos
- Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, OH
- Department of Emergency Medicine, The Johns Hopkins Hospital/The Johns Hopkins School of Medicine, Baltimore, MD
- Alexander Chaitoff
- Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, OH
- Department of Internal Medicine, Brigham and Women's Hospital/Harvard Medical School, Boston, MA
- Pei-Chun Yu
- Department of Quantitative Health Sciences, Cleveland Clinic, OH
- Susannah L Rose
- Office of Patient Experience, Cleveland Clinic, OH
- Office of Patient Experience, Center for Bioethics, Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, Cleveland, OH
12. Reuter K, Zhu Y, Angyan P, Le N, Merchant AA, Zimmer M. Public Concern About Monitoring Twitter Users and Their Conversations to Recruit for Clinical Trials: Survey Study. J Med Internet Res 2019; 21:e15455. PMID: 31670698; PMCID: PMC6914244; DOI: 10.2196/15455.
Abstract
Background Social networks such as Twitter offer the clinical research community a novel opportunity for engaging potential study participants based on user activity data. However, the availability of public social media data has led to new ethical challenges about respecting user privacy and the appropriateness of monitoring social media for clinical trial recruitment. Researchers have voiced the need for involving users' perspectives in the development of ethical norms and regulations. Objective This study examined the attitudes and level of concern among Twitter users and nonusers about the use of Twitter for monitoring social media users and their conversations to recruit potential clinical trial participants. Methods We used two online methods for recruiting study participants: the open survey was (1) advertised on Twitter between May 23 and June 8, 2017, and (2) deployed on TurkPrime, a crowdsourcing data acquisition platform, over the same period. Eligible participants were adults, 18 years of age or older, who lived in the United States. People with and without Twitter accounts were included in the study. Results While nearly half of the respondents (94/603 [15.6%] recruited on Twitter and 509/603 [84.4%] on TurkPrime) indicated agreement that social media monitoring constitutes a form of eavesdropping that invades their privacy, over one-third disagreed and nearly one in five had no opinion. A chi-square test revealed a positive relationship between respondents' general privacy concern and their average concern about Internet research (P<.005). We found associations between respondents' Twitter literacy and their concerns about the ability of researchers to monitor their Twitter activity for clinical trial recruitment (P=.001), whether they consider Twitter monitoring for clinical trial recruitment to be eavesdropping (P<.001), and whether they consider it an invasion of privacy (P=.003). As Twitter literacy increased, so did people's concerns about researchers monitoring Twitter activity. Our data support the previously suggested use of a nonexceptionalist methodology for assessing social media in research, insofar as social media-based recruitment does not need to be considered exceptional and, for most, it is considered preferable to traditional in-person interventions at physical clinics. The expressed attitudes were highly contextual, depending on factors such as the type of disease or health topic (eg, HIV/AIDS vs obesity vs smoking), the entity or person monitoring users on Twitter, and the information monitored. Conclusions The data and findings from this study contribute to the critical dialogue with the public about the use of social media in clinical research. The findings suggest that most users do not think that monitoring Twitter for clinical trial recruitment constitutes inappropriate surveillance or a violation of privacy. However, researchers should remain mindful that some participants might find social media monitoring problematic when connected with certain conditions or health topics. Further research should isolate factors that influence the level of concern among social media users across platforms and populations and inform the development of clearer and more consistent guidelines.
Affiliation(s)
- Katja Reuter
- Southern California Clinical and Translational Science Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Institute for Health Promotion and Disease Prevention Research, Department of Preventive Medicine, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Yifan Zhu
- School of Information Studies, University of Wisconsin-Milwaukee, Milwaukee, WI, United States
- Praveen Angyan
- Southern California Clinical and Translational Science Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- NamQuyen Le
- Southern California Clinical and Translational Science Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Michael Zimmer
- Department of Computer Science, Marquette University, Milwaukee, WI, United States
13. Godinho A, Schell C, Cunningham JA. How one small text change in a study document can impact recruitment rates and follow-up completions. Internet Interv 2019; 18:100284. PMID: 31890631; PMCID: PMC6926325; DOI: 10.1016/j.invent.2019.100284.
Abstract
BACKGROUND The validity and reliability of longitudinal research depend heavily on the recruitment and retention of representative samples. Various strategies have been developed and tested for improving recruitment and follow-up rates in health-behavioural research, but few have examined the role of linguistic choices and study document readability in participation rates. This study examined the impact of one small text change, assigning an inappropriate or grade-8 reading-level password for intervention access, on participation rates and attrition in an online alcohol intervention trial. METHODS Participants were recruited into an online alcohol intervention study using Amazon's Mechanical Turk via a multi-step recruitment process that required participants to log into a study portal using a pre-assigned password. Passwords were qualitatively coded as grade-8 and/or inappropriate for use within a professional setting. Separate logistic regressions examined which demographic and clinical characteristics and password categorizations were most strongly associated with recruitment rates and follow-up completions. RESULTS Inappropriate passwords were a barrier to recruitment among participants with post-secondary education compared with those with less education (p = 0.044), while grade-8 passwords appeared to significantly facilitate the completion of 6-month follow-ups (p = 0.005). CONCLUSIONS Altogether, these findings suggest that some linguistic choices may play an important role in recruitment, while others, such as readability, may have longer-term effects on follow-up rates and attrition. Possible explanations for the findings, as well as sample selection biases during recruitment and follow-up, are discussed. Limitations of the study are stated and recommendations for researchers are provided. TRIAL REGISTRATION ClinicalTrials.gov NCT02977026. Registered 27 Nov 2016.
Affiliation(s)
- Alexandra Godinho
- Institute of Mental Health and Policy Research, Centre for Addiction and Mental Health, Toronto, Canada
- Christina Schell
- Institute of Mental Health and Policy Research, Centre for Addiction and Mental Health, Toronto, Canada
- John A. Cunningham
- Institute of Mental Health and Policy Research, Centre for Addiction and Mental Health, Toronto, Canada
- Department of Psychiatry, University of Toronto, Toronto, Canada
- Australian National University, Canberra, Australia
- Corresponding author: Centre for Addiction and Mental Health, 33 Russell St., Toronto, Ontario M5S 2S1, Canada