1
Garbarski D, Dykema J, Schaeffer NC, Jones CP, Neman TS, Edwards DF. Factors Associated with Interviewers' Evaluations of Respondents' Performance in Telephone Interviews: Behavior, Response Quality Indicators, and Characteristics of Respondents and Interviewers. Public Opin Q 2023;87:480-506. PMID: 37705920; PMCID: PMC10496573; DOI: 10.1093/poq/nfad028.
Abstract
Interviewers' postinterview evaluations of respondents' performance (IEPs) are paradata, used to describe the quality of the data obtained from respondents. IEPs are driven by a combination of factors, including respondents' and interviewers' sociodemographic characteristics and what actually transpires during the interview. However, relatively few studies examine how IEPs are associated with features of the response process, including facets of the interviewer-respondent interaction and patterns of responding that index data quality. We examine whether features of the response process (various respondents' behaviors and response quality indicators) are associated with IEPs in a survey with a diverse set of respondents focused on barriers and facilitators to participating in medical research. We also examine whether there are differences in IEPs across respondents' and interviewers' sociodemographic characteristics. Our results show that both respondents' behaviors and response quality indicators predict IEPs, indicating that IEPs reflect what transpires in the interview. In addition, interviewers appear to approach the task of evaluating respondents with differing frameworks, as evidenced by the variation in IEPs attributable to interviewers and associations between IEPs and interviewers' gender. Further, IEPs were associated with respondents' education and ethnoracial identity, net of respondents' behaviors, response quality indicators, and sociodemographic characteristics of respondents and interviewers. Future research should continue to build on studies that examine the correlates of IEPs to better inform whether, when, and how to use IEPs as paradata about the quality of the data obtained.
Affiliation(s)
- Dana Garbarski
- Associate Professor, Department of Sociology, and Director, University Core Curriculum, Loyola University Chicago, Chicago, IL, US
- Jennifer Dykema
- Associate Professor, Department of Sociology, University of Wisconsin-Madison, Madison, WI, US; and Faculty Director, University of Wisconsin Survey Center, University of Wisconsin-Madison, Madison, WI, US
- Nora Cate Schaeffer
- Professor Emerita, Department of Sociology, University of Wisconsin-Madison, Madison, WI, US
- Cameron P Jones
- Graduate Student, Department of Statistics, University of Wisconsin-Madison, Madison, WI, US
- Tiffany S Neman
- PhD Candidate, Department of Sociology, University of Wisconsin-Madison, Madison, WI, US
- Dorothy Farrar Edwards
- Professor, Department of Kinesiology and Department of Medicine, University of Wisconsin-Madison, Madison, WI, US
2
Dykema J, Jones CP, Garbarski D, Farias M, Edwards DF. Exploring the Relationship between Medical Research Literacy and Respondents' Expressed Likelihood to Participate in a Clinical Trial. Int J Environ Res Public Health 2022;19:15168. PMID: 36429884; PMCID: PMC9690895; DOI: 10.3390/ijerph192215168.
Abstract
Medical research literacy (MRL) is a facet of health literacy that measures a person's understanding of informed consent and other aspects of participation in medical research. While existing research on MRL is limited, there are reasons to believe MRL may be associated with a willingness to participate in medical research. We use data from a racially balanced sample of survey respondents (n = 410): (1) to analyze how MRL scores vary by respondents' socio-demographic characteristics; (2) to examine how MRL relates to respondents' expressed likelihood to participate in a clinical trial; and (3) to provide considerations on the measurement of MRL. The results indicate no differences in MRL scores by race or gender; younger (p < 0.05) and more educated (p < 0.001) individuals have significantly higher MRL scores. Further, higher MRL scores are associated with significantly lower levels of expressed likelihood to participate in a clinical trial. Additionally, the MRL scale included both true and false statements, and analyses demonstrate significant differences in how these relate to outcomes. Altogether, the results signal that further research is needed to understand MRL, how it relates to socio-demographic characteristics associated with research participation, and how it can be measured effectively.
Affiliation(s)
- Jennifer Dykema
- Department of Sociology, University of Wisconsin-Madison, Madison, WI 53706, USA
- University of Wisconsin Survey Center, University of Wisconsin-Madison, Madison, WI 53706, USA
- Cameron P. Jones
- University of Wisconsin Survey Center, University of Wisconsin-Madison, Madison, WI 53706, USA
- Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, USA
- Dana Garbarski
- Department of Sociology, Loyola University Chicago, Chicago, IL 60660, USA
- Mia Farias
- University of Wisconsin Survey Center, University of Wisconsin-Madison, Madison, WI 53706, USA
- Dorothy Farrar Edwards
- Departments of Kinesiology and Medicine, University of Wisconsin-Madison, Madison, WI 53706, USA
3
Sandgren EP, Streiffer R, Dykema J, Assad N, Moberg J. Influence of animal pain and distress on judgments of animal research justifiability among university undergraduate students and faculty. PLoS One 2022;17:e0272306. PMID: 35939500; PMCID: PMC9359541; DOI: 10.1371/journal.pone.0272306.
Abstract
Acceptance of animal research by the public depends on several characteristics of the specific experimental study. In particular, acceptance decreases as potential animal pain or distress increases. Our objective in this study was to quantify the magnitude of pain/distress that university undergraduate students and faculty would find justifiable in animal research, and to see how that justifiability varied according to the purpose of the research or the species to which the animal belonged. We also evaluated how demographic characteristics of respondents may be associated with their opinions about justifiability. To accomplish this goal, we developed and administered a survey to students and faculty at the University of Wisconsin-Madison. Our survey employed Likert-style questions that asked respondents to designate the level of animal pain or distress that they felt was justifiable for each of six purposes: animal disease, human disease, basic research, human medicine, chemical testing, or cosmetic testing. These questions were asked about five groups of species: monkeys, dogs/cats, pigs/sheep, rats/mice, and small fish. We used the data to establish a purpose-specific pain/distress scale, a species-specific pain/distress scale, and a composite pain/distress scale that, for each respondent, averaged the extent of justifiable pain/distress across all purposes and species. For purpose, students were more likely to choose higher levels of pain for animal disease research, followed by human disease, basic research, human medicine, chemical testing, and cosmetic testing. Faculty were more likely to choose the same level of pain for the first four purposes, followed by lower levels of pain for chemical and cosmetic testing. For species, students were more likely to choose higher levels of pain for small fish and rats/mice (tied) and for pigs/sheep and monkeys (tied) than for dogs/cats. For faculty, the order from least to most justifiable pain/distress was small fish, rats/mice, pigs/sheep, then dogs/cats and monkeys (the latter two tied). Interestingly, exploratory factor analysis of the pain/distress scales indicated that when it comes to justifying higher levels of pain and distress, respondents identified two distinct categories of purposes (chemical and cosmetic testing), for which respondents were less likely to justify higher levels of pain or distress compared to other purposes, and two distinct categories of species (small fish and rats/mice), for which respondents were more likely to justify higher levels of pain/distress than for other species. We found that the spread of acceptance of animal research was much smaller when survey questions included pain/distress than when only purpose or species were part of the question. Demographically, women, vegetarians/vegans, and respondents with no experience in animal research justified less animal pain/distress than their counterparts. Not surprisingly, a lower level of support for animal research in general was correlated with lower justifiability of pain/distress. Based on these findings, we discuss the role of animal pain/distress in regulatory considerations underlying decisions about whether to approve specific animal uses, and suggest ways to strengthen the ethical review and public acceptance of animal research.
Affiliation(s)
- Eric P. Sandgren
- Pathobiological Sciences, School of Veterinary Medicine, University of Wisconsin-Madison, Madison, WI, United States of America
- Robert Streiffer
- Medical History and Bioethics, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, United States of America
- Jennifer Dykema
- University of Wisconsin-Madison Survey Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Nadia Assad
- University of Wisconsin-Madison Survey Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Jackson Moberg
- University of Wisconsin-Madison Survey Center, University of Wisconsin-Madison, Madison, WI, United States of America
4
Dykema J, Schaeffer NC, Garbarski D, Assad N, Blixt S. Towards a reconsideration of the use of agree-disagree questions in measuring subjective evaluations. Res Social Adm Pharm 2022;18:2335-2344. PMID: 34253471; PMCID: PMC8692311; DOI: 10.1016/j.sapharm.2021.06.014.
Abstract
Agree-disagree (AD) or Likert questions (e.g., "I am extremely satisfied: strongly agree … strongly disagree") are among the most frequently used response formats to measure attitudes and opinions in the social and medical sciences. This review and research synthesis focuses on the measurement properties and potential limitations of AD questions. The research leads us to advocate for an alternative questioning strategy in which items are written to directly ask about their underlying response dimensions using response categories tailored to match the response dimension, which we refer to as item-specific (IS) (e.g., "How satisfied are you: not at all … extremely"). In this review we: 1) synthesize past research comparing data quality for AD and IS questions; 2) present conceptual models of and review research supporting respondents' cognitive processing of AD and IS questions; and 3) provide an overview of question characteristics that frequently differ between AD and IS questions and may affect respondents' cognitive processing and data quality. Although experimental studies directly comparing AD and IS questions yield some mixed results, more studies find that IS questions are associated with desirable data quality outcomes (e.g., validity and reliability) and that AD questions are associated with undesirable outcomes (e.g., acquiescence and response effects). Based on available research, models of cognitive processing, and a review of question characteristics, we recommend IS questions over AD questions for most purposes. For researchers considering the use of previously administered AD questions and instruments, we discuss the challenges of translating questions from AD to IS response formats.
Affiliation(s)
- Jennifer Dykema
- University of Wisconsin Survey Center, University of Wisconsin, Madison, USA; Department of Sociology, University of Wisconsin, Madison, USA
- Nora Cate Schaeffer
- University of Wisconsin Survey Center, University of Wisconsin, Madison, USA; Department of Sociology, University of Wisconsin, Madison, USA
- Dana Garbarski
- Department of Sociology, Loyola University, Chicago, USA
- Nadia Assad
- University of Wisconsin Survey Center, University of Wisconsin, Madison, USA
5
Affiliation(s)
- Dorothy Farrar Edwards
- University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Wisconsin Alzheimer's Disease Research Center, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
6
Chin E, Croes K, Dykema J, Suver C, Hamann J, Truong A, Doerr M, Lah JJ, Goldstein FC, Blazel H, Manzanares C, Edwards DF. A qualitative analysis of study participant and study partner experiences with the consent process: Assessments guiding the development of an electronic consent (ECONSENT). Alzheimers Dement 2020. DOI: 10.1002/alz.043917.
Affiliation(s)
- Erin Chin
- University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Alzheimer's Disease Research Center, Madison, WI, USA
- Kenneth Croes
- University of Wisconsin Survey Center, Madison, WI, USA
- James J Lah
- Emory University School of Medicine, Atlanta, GA, USA
- Hanna Blazel
- University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Dorothy Farrar Edwards
- Department of Medicine, Division of Geriatrics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
7
Dykema J, Stevenson J, Assad N, Kniss C, Taylor CA. Effects of Sequential Prepaid Incentives on Response Rates, Data Quality, Sample Representativeness, and Costs in a Mail Survey of Physicians. Eval Health Prof 2020;44:235-244. PMID: 32924566; DOI: 10.1177/0163278720958186.
Abstract
While collecting high quality data from physicians is critical, response rates for physician surveys are frequently low. A proven method for increasing response in mail surveys is to provide a small, prepaid monetary incentive in the initial mailing. More recently, researchers have begun experimenting with adding a second cash incentive in a follow-up contact in order to increase participation among more reluctant respondents. To assess the effects of sequential incentives on response rates, data quality, sample representativeness, and costs, physicians (N = 1,500) were randomly assigned to treatments that crossed the amount of a first ($5 or $10) and second ($0, $5, or $10) incentive to form the following groups: Group $5/$5; Group $5/$10; Group $10/$0; Group $10/$5; and Group $10/$10. Overall, second incentives were associated with higher response rates and lower costs per completed survey, and while they had no effect on item nonresponse, they increased sample representativeness.
Affiliation(s)
- Jennifer Dykema
- University of Wisconsin Survey Center, University of Wisconsin-Madison, WI, USA; Department of Sociology, University of Wisconsin-Madison, WI, USA
- John Stevenson
- University of Wisconsin Survey Center, University of Wisconsin-Madison, WI, USA
- Nadia Assad
- University of Wisconsin Survey Center, University of Wisconsin-Madison, WI, USA
- Chad Kniss
- University of Wisconsin Survey Center, University of Wisconsin-Madison, WI, USA
- Catherine A Taylor
- Department of Global Community Health and Behavioral Sciences, Tulane University School of Public Health and Tropical Medicine, New Orleans, LA, USA
8
Sandgren EP, Streiffer R, Dykema J, Assad N, Moberg J. Assessing undergraduate student and faculty views on animal research: What do they know, whom do they trust, and how much do they care? PLoS One 2019;14:e0223375. PMID: 31647851; PMCID: PMC6812826; DOI: 10.1371/journal.pone.0223375.
Abstract
Research using animals is controversial. To develop sound public outreach and policy about this issue, we need information about both the underlying science and people’s attitudes and knowledge. To identify attitudes toward this subject at the University of Wisconsin-Madison, we developed and administered a survey to undergraduate students and faculty. The survey asked respondents about the importance of, their confidence in their knowledge about, and who they trusted to provide information on animal research. Findings indicated attitudes varied by academic discipline, especially among faculty. Faculty in the biological sciences, particularly those who had participated in an animal research project, reported the issue to be most important, and they reported greater confidence in their knowledge about pro and con arguments. Among students, being female, a vegetarian/vegan, or participating in animal research were associated with higher ratings of importance. Confidence in knowledge about regulation and its adequacy was very low across all groups except biological science faculty. Both students and faculty identified university courses and spokespersons to be the most trusted sources of information about animal research. UW-Madison has a long history of openness about animal research, which correlates with the high level of trust by students and faculty. Nevertheless, confidence in knowledge about animal research and its regulation remains limited, and both students and faculty indicated their desire to receive more information from courses and spokespersons. Based on these findings, we argue that providing robust university-wide outreach and course-based content about animal research should be considered an organizational best practice, in particular for colleges and universities.
Affiliation(s)
- Eric P. Sandgren
- Pathobiological Sciences, School of Veterinary Medicine, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Robert Streiffer
- Medical History and Bioethics, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Jennifer Dykema
- University of Wisconsin-Madison Survey Center, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Nadia Assad
- University of Wisconsin-Madison Survey Center, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Jackson Moberg
- University of Wisconsin-Madison Survey Center, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
9
Garbarski D, Schaeffer NC, Dykema J. Interviewers' Ratings of Respondents' Health: Predictors and Association With Mortality. J Gerontol B Psychol Sci Soc Sci 2019;74:1213-1221. PMID: 29220523; PMCID: PMC6748795; DOI: 10.1093/geronb/gbx146.
Abstract
OBJECTIVES Recent research indicates that survey interviewers' ratings of respondents' health (IRH) may provide supplementary health information about respondents in surveys of older adults. Although IRH is a potentially promising measure of health to include in surveys, our understanding of the factors contributing to IRH remains incomplete. METHODS We use data from the 2011 face-to-face wave of the Wisconsin Longitudinal Study, a longitudinal study of older adults from the Wisconsin high school class of 1957 and their selected siblings. We first examine whether a range of factors predict IRH: respondents' characteristics that interviewers learn about and observe as respondents answer survey questions, interviewers' evaluations of some of what they observe, and interviewers' characteristics. We then examine the role of IRH, respondents' self-rated health (SRH), and associated factors in predicting mortality over a 3-year follow-up. RESULTS As in prior studies, we find that IRH is associated with respondents' characteristics. In addition, this study is the first to document how IRH is associated with both interviewers' evaluations of respondents and interviewers' characteristics. Furthermore, we find that the association between IRH and the strong criterion of mortality remains after controlling for respondents' characteristics and interviewers' evaluations of respondents. DISCUSSION We propose that researchers incorporate IRH in surveys of older adults as a cost-effective, easily implemented, and supplementary measure of health.
Affiliation(s)
- Dana Garbarski
- Department of Sociology, Loyola University Chicago, Illinois
- Nora Cate Schaeffer
- Department of Sociology, University of Wisconsin-Madison
- University of Wisconsin Survey Center, University of Wisconsin-Madison
- Jennifer Dykema
- University of Wisconsin Survey Center, University of Wisconsin-Madison
10
Dykema J, Garbarski D, Wal IF, Edwards DF. Measuring Trust in Medical Researchers: Adding Insights from Cognitive Interviews to Examine Agree-Disagree and Construct-Specific Survey Questions. J Off Stat 2019;35:353-386. PMID: 33542588; PMCID: PMC7857484; DOI: 10.2478/jos-2019-0017.
Abstract
While scales measuring subjective constructs historically rely on agree-disagree (AD) questions, recent research demonstrates that construct-specific (CS) questions clarify underlying response dimensions that AD questions leave implicit and CS questions often yield higher measures of data quality. Given acknowledged issues with AD questions and certain established advantages of CS items, the evidence for the superiority of CS questions is more mixed than one might expect. We build on previous investigations by using cognitive interviewing to deepen understanding of AD and CS response processing and potential sources of measurement error. We randomized 64 participants to receive an AD or CS version of a scale measuring trust in medical researchers. We examine several indicators of data quality and cognitive response processing including: reliability, concurrent validity, recency, response latencies, and indicators of response processing difficulties (e.g., uncodable answers). Overall, results indicate reliability is higher for the AD scale, neither scale is more valid, and the CS scale is more susceptible to recency effects for certain questions. Results for response latencies and behavioral indicators provide evidence that the CS questions promote deeper processing. Qualitative analysis reveals five sources of difficulties with response processing that shed light on under-examined reasons why AD and CS questions can produce different results, with CS not always yielding higher measures of data quality than AD.
Affiliation(s)
- Jennifer Dykema
- University of Wisconsin Survey Center (UWSC), 4308 Sterling Hall, 475 N. Charter St., Madison, WI 53706, U.S.A
- Dana Garbarski
- Loyola University Chicago, Coffey Hall 440, 1032 W. Sheridan Rd., Chicago, IL 60660, U.S.A.
- Ian F Wal
- Steelcase, 901 44th Street SE, Grand Rapids, MI 49508, U.S.A.
- Dorothy Farrar Edwards
- University of Wisconsin-Madison, 2176 Medical Science Center, 1300 University Avenue, Madison, WI 53706, U.S.A.
11
Garbarski D, Dykema J, Croes KD, Edwards DF. How participants report their health status: cognitive interviews of self-rated health across race/ethnicity, gender, age, and educational attainment. BMC Public Health 2017;17:771. PMID: 28978325; PMCID: PMC5628425; DOI: 10.1186/s12889-017-4761-2.
Abstract
BACKGROUND Self-rated health (SRH) is widely used to measure subjective health. Yet it is unclear what underlies health ratings, with implications for understanding the validity of SRH overall and across sociodemographic characteristics. We analyze participants' explanations of how they formulated their SRH answer in addition to which health factors they considered and examine group differences in these processes. METHODS Cognitive interviews were conducted with 64 participants in a convenience quota sample crossing dimensions of race/ethnicity (white, Latino, black, American Indian), gender, age, and education. Participants rated their health then described their thoughts when answering SRH. We coded participants' answers in an inductive, iterative, and systematic process from interview transcripts, developing analytic categories (i.e., themes) and subdimensions within. We examined whether the presence of each dimension of an analytic category varied across sociodemographic groups. RESULTS Our qualitative analysis led to the identification and classification of various subdimensions of the following analytic categories: types of health factors mentioned, valence of health factors, temporality of health factors, conditional health statements, and descriptions and definitions of health. We found differences across groups in some types of health factors mentioned (corresponding, conflicting, or novel with respect to prior research). Furthermore, we also documented various processes through which respondents integrate seemingly disparate health factors to formulate an answer through valence and conditional health statements. Finally, we found some evidence of sociodemographic group differences with respect to types of health factors mentioned, valence of health factors, and conditional health statements, highlighting avenues for future research.
CONCLUSION This study provides a description of how participants rate their general health status and highlights potential differences in these processes across sociodemographic groups, helping to provide a more comprehensive understanding of how SRH functions as a measure of health.
Affiliation(s)
- Dana Garbarski
- Department of Sociology, Loyola University Chicago, Coffey Hall 440, 1032 W. Sheridan Rd, Chicago, IL, 60660, USA.
- Jennifer Dykema
- University of Wisconsin Survey Center, University of Wisconsin-Madison, 475 N. Charter Street, Room 4308, Madison, WI, 53706, USA
- Kenneth D Croes
- University of Wisconsin Survey Center, University of Wisconsin-Madison, 475 N. Charter Street, Room 4416, Madison, WI, 53706, USA
- Dorothy F Edwards
- Departments of Kinesiology-Occupational Therapy Program, Neurology and Medicine, University of Wisconsin-Madison, 2170 Medical Science Center, 1300 University Avenue, Madison, WI, 53706-1532, USA
12
Abstract
Explanatory style is a cognitive personality variable with diverse correlates reflecting good versus bad adaptation. It is usually measured with the Attributional Style Questionnaire (ASQ), but existing versions of this instrument can be difficult for research participants to complete without close supervision. We describe a new version of the ASQ and its use in a mail survey of 146 college students. Results support its efficiency, reliability, and validity. A satisfactory response rate of 70% was attained. Very few items were omitted among the questionnaires returned (1.3%). Subscale reliabilities were satisfactory (alphas > .70), and the new ASQ correlated with reports of depressive symptoms (rs > .28), suggesting its appropriateness for general use with adults, including in survey research.
13
Garbarski D, Schaeffer NC, Dykema J. Interviewing Practices, Conversational Practices, and Rapport: Responsiveness and Engagement in the Standardized Survey Interview. Sociol Methodol 2016;46:1-38. PMID: 27867231; PMCID: PMC5110268; DOI: 10.1177/0081175016637890.
Abstract
"Rapport" has been used to refer to a range of positive psychological features of an interaction -- including a situated sense of connection or affiliation between interactional partners, comfort, willingness to disclose or share sensitive information, motivation to please, or empathy. Rapport could potentially benefit survey participation and response quality by increasing respondents' motivation to participate, disclose, or provide accurate information. Rapport could also harm data quality if motivation to ingratiate or affiliate caused respondents to suppress undesirable information. Some previous research suggests that motives elicited when rapport is high conflict with the goals of standardized interviewing. We examine rapport as an interactional phenomenon, attending to both the content and structure of talk. Using questions about end-of-life planning in the 2003-2005 wave of the Wisconsin Longitudinal Study, we observe that rapport consists of behaviors that can be characterized as dimensions of responsiveness by interviewers and engagement by respondents. We identify and describe types of responsiveness and engagement in selected question-answer sequences and then devise a coding scheme to examine their analytic potential with respect to the criterion of future study participation. Our analysis suggests that responsive and engaged behaviors vary with respect to the goals of standardization-some conflict with these goals, while others complement them.
Affiliation(s)
- Nora Cate Schaeffer
- Department of Sociology, University of Wisconsin-Madison; University of Wisconsin Survey Center, University of Wisconsin-Madison
- Jennifer Dykema
- University of Wisconsin Survey Center, University of Wisconsin-Madison
|
14
|
Abstract
PURPOSE: Following calls for replication of research studies, this study documents the results of two studies that experimentally examine the impact of response option order on self-rated health (SRH).
METHODS: Two studies from an online panel survey examined how the order of response options (positive to negative versus negative to positive) influences the distribution of SRH answers.
RESULTS: The results of both studies indicate that the distribution of SRH varies across the experimental treatments, and mean SRH is lower (worse) when the response options start with "poor" rather than "excellent." In addition, there are differences across the two studies in the distribution of SRH and mean SRH when the response options begin with "excellent," but not when the response options begin with "poor."
CONCLUSION: The similarities in the general findings across the two studies strengthen the claim that SRH will be lower (worse) when the response options are ordered beginning with "poor" rather than "excellent" in online self-administered questionnaires, with implications for the validity of SRH. The slight differences in the administration of the seemingly identical studies further strengthen the claim and also serve as a reminder of the inherent variability of a single permutation of any given study.
Affiliation(s)
- Dana Garbarski
- Department of Sociology, Loyola University Chicago, Coffey Hall 440, 1032 W. Sheridan Rd., Chicago, IL, 60660, USA
- Nora Cate Schaeffer
- Department of Sociology, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI, 53706, USA; University of Wisconsin Survey Center, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI, 53706, USA
- Jennifer Dykema
- University of Wisconsin Survey Center, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI, 53706, USA
|
15
|
Dykema J, Cyffka K, Banghart M, Nordheim EV, Garbarski D, Schaeffer NC. The Impact of Parenthetical Phrases on Interviewers' and Respondents' Processing of Survey Questions. Surv Pract 2016; 9. [PMID: 31467801] [DOI: 10.29115/sp-2016-0008]
Abstract
Many surveys contain sets of questions (e.g., batteries) in which the same phrase, such as a reference period or a set of response categories, applies across the set. When formatting questions for interviewer administration, question writers often enclose these repeated phrases in parentheses to signal that interviewers have the option of reading the phrase. Little research, however, examines what impact this practice has on data quality. We explore whether the presence and use of parenthetical statements are associated with indicators of processing problems for both interviewers and respondents, including the interviewer's ability to read the question exactly as worded and the respondent's ability to answer the question without displaying problems answering (e.g., expressing uncertainty). Data are from questions about physical and mental health from 355 digitally recorded, transcribed, and interaction-coded telephone interviews. We implement a mixed-effects model with crossed random effects and nested and crossed fixed effects. The models also control for some respondent and interviewer characteristics. Findings indicate respondents are less likely to exhibit a problem when parentheticals are read, but reading the parentheticals increases the odds (marginally significant) that interviewers will make a reading error.
|
16
|
Edwards DF, Dowling NM, Dykema J. P4‐037: Factors associated with the likelihood of providing cerebrospinal fluid for a medical research study in an ethnically diverse sample of adults. Alzheimers Dement 2015. [DOI: 10.1016/j.jalz.2015.06.1741]
Affiliation(s)
- Dorothy Farrar Edwards
- Wisconsin Alzheimer's Disease Research Center, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
|
17
|
Garbarski D, Schaeffer NC, Dykema J. The effects of response option order and question order on self-rated health. Qual Life Res 2014; 24:1443-53. [PMID: 25409654] [DOI: 10.1007/s11136-014-0861-y]
Abstract
OBJECTIVES: This study aims to assess the impact of response option order and question order on the distribution of responses to the self-rated health (SRH) question and the relationship between SRH and other health-related measures.
METHODS: In an online panel survey, we implement a 2-by-2 between-subjects factorial experiment, manipulating the following levels of each factor: (1) order of response options ("excellent" to "poor" versus "poor" to "excellent") and (2) order of the SRH item (either preceding or following the administration of domain-specific health items). We use Chi-square difference tests, polychoric correlations, and differences in means and proportions to evaluate the effect of the experimental treatments on SRH responses and the relationship between SRH and other health measures.
RESULTS: Mean SRH is higher (better health) and the proportion in "fair" or "poor" health lower when response options are ordered from "excellent" to "poor" and SRH is presented first compared to other experimental treatments. Presenting SRH after domain-specific health items increases its correlation with these items, particularly when response options are ordered "excellent" to "poor." Among participants with the highest level of current health risks, SRH is worse when it is presented last versus first.
CONCLUSION: While more research on the presentation of SRH is needed across a range of surveys, we suggest that ordering response options from "poor" to "excellent" might reduce positive clustering. Given the question order effects found here, we suggest presenting SRH before domain-specific health items in order to increase inter-survey comparability, as domain-specific health items will vary across surveys.
Affiliation(s)
- Dana Garbarski
- Department of Sociology, Loyola University Chicago, 1032 West Sheridan Road, Chicago, IL, 60660, USA
|
18
|
Abstract
The versatility, speed, and reduced costs with which web surveys can be conducted with clinicians are often offset by low response rates. Drawing on best practices and general recommendations in the literature, we provide an evidence-based overview of methods for conducting online surveys with providers. We highlight important advantages and disadvantages of conducting provider surveys online and include a review of differences in response rates between web and mail surveys of clinicians. When administered online, design-based features affect rates of survey participation and data quality. We examine features likely to have an impact including sample frames, incentives, contacts (type, timing, and content), mixed-mode approaches, and questionnaire length. We make several recommendations regarding optimal web-based designs, but more empirical research is needed, particularly with regard to identifying which combinations of incentive and contact approaches yield the highest response rates and are the most cost-effective.
Affiliation(s)
- Tara Piché
- University of Wisconsin–Madison, WI, USA
|
19
|
Dykema J, Diloreto K, Price JL, White E, Schaeffer NC. ACASI Gender-of-Interviewer Voice Effects on Reports to Questions about Sensitive Behaviors Among Young Adults. Public Opin Q 2012; 76:311-325. [PMID: 24991062] [PMCID: PMC4079084] [DOI: 10.1093/poq/nfs021]
Abstract
Although previous research indicates that audio computer-assisted self-interviewing (ACASI) yields higher reports of threatening behaviors than interviewer-administered interviews, very few studies have examined the potential effect of the gender of the ACASI voice on survey reports. Because the voice in ACASI necessarily has a gender, it is important to understand whether using a voice that is perceived as male or female might further enhance the validity associated with ACASI. This study examines gender-of-voice effects for a set of questions about sensitive behaviors administered via ACASI to a sample of young adults at high risk for engaging in the behaviors. Results showed higher levels of engagement in the behaviors and more consistent reporting among males when responding to a female voice, indicating that males were potentially more accurate when reporting to the female voice. Reports by females were not influenced by the voice's gender. Our analysis adds to research on gender-of-voice effects in surveys, with important findings on measuring sensitive behaviors among young adults.
Affiliation(s)
- Jennifer Dykema
- Jennifer Dykema is a research scientist and survey methodologist at the University of Wisconsin Survey Center, Madison, WI, USA. Kerryann Diloreto is a senior project director at the University of Wisconsin Survey Center, Madison, WI, USA. Jessica Price is a project director at the University of Wisconsin Survey Center, Madison, WI, USA. Eric White is the technology director at the University of Wisconsin Survey Center, Madison, WI, USA. Nora Cate Schaeffer is Sewell Bascom Professor of Sociology at the University of Wisconsin–Madison and the Faculty Director of the University of Wisconsin Survey Center, Madison, WI, USA. The authors thank the Midwest Evaluation of the Adult Functioning of Former Foster Youth—sponsored by welfare agencies in Illinois, Iowa, and Wisconsin and by the Chapin Hall Center for Children at the University of Chicago and conducted by the University of Wisconsin Survey Center (UWSC)—for including this experiment. The research reported here was supported in part by the Wisconsin Center for Demography and Ecology, National Institute of Child Health and Human Development Center Grant [R24 HD047873 to James R. Walker], and by the University of Wisconsin Survey Center (UWSC), which receives support from the College of Letters and Science. The authors also thank the editor and anonymous reviewers for suggesting that we conduct a more formal evaluation of the voice's characteristics, Mick Couper for advice during the early development of the experimental design, Mark Courtney for permission to implement the gender-of-voice experiment, Robert Schultz for assisting with the implementation of the evaluation of the voice's characteristics, Doug Hemken for advice on the analysis, and John Stevenson and Steven Blixt for providing comments on earlier drafts. A preliminary version of this article was presented at the 2010 meeting of the American Association for Public Opinion Research and appeared in the 2010 proceedings of the Survey Research Methods Section of the American Statistical Association. Opinions expressed here are those of the authors and do not necessarily reflect those of the sponsors or related organizations. Address correspondence to Jennifer Dykema, University of Wisconsin Survey Center, University of Wisconsin–Madison, 475 N. Charter Street, Room 4308, Madison, WI 53706, USA; e-mail:
|
20
|
Abstract
We begin with a look back at the field to identify themes of recent research that we expect to continue to occupy researchers in the future. As part of this overview, we characterize the themes and topics examined in research about measurement and survey questions published in Public Opinion Quarterly in the past decade. We then characterize the field more broadly by highlighting topics that we expect to continue or to grow in importance, including the relationship between survey questions and the total survey error perspective, cognitive versus interactional approaches, interviewing practices, mode and technology, visual aspects of question design, and culture. Considering avenues for future research, we advocate for a decision-oriented framework for thinking about survey questions and their characteristics. The approach we propose distinguishes among various aspects of question characteristics, including question topic, question type and response dimension, conceptualization and operationalization of the target object, question structure, question form, response categories, question implementation, and question wording. Thinking about question characteristics more systematically would allow study designs to take into account relationships among these characteristics and identify gaps in current knowledge.
Affiliation(s)
- Nora Cate Schaeffer
- NORA CATE SCHAEFFER is the Sewell Bascom Professor of Sociology at the University of Wisconsin–Madison and the Faculty Director of the University of Wisconsin Survey Center, Madison, WI, USA. JENNIFER DYKEMA is an Associate Research Scientist and Survey Methodologist at the University of Wisconsin Survey Center, Madison, WI, USA. We wish to thank Barbara Forsyth, Eleanor Singer, Roger Tourangeau, Gordon Willis, and the POQ editors for various helpful contributions and comments, and Jeremy Beach for research assistance. The research reported here was supported in part by the Wisconsin Center for Demography and Ecology (National Institute of Child Health and Human Development Center Grant [R24 HD047873 to James R. Walker]); by the Wisconsin Center for Demography of Health and Aging (National Institute on Aging Center Grant [P30 AG017266 to Robert M. Hauser]); and by the University of Wisconsin Survey Center (UWSC), which receives support from the College of Letters and Science.
|
21
|
Cardenas K, Wiersma G, Dykema J, Rossman L, Fedewa J, Jones J. 279 Impact of the Victim's Menstrual Cycle Phase on Genital Injuries Following Sexual Assault. Ann Emerg Med 2011. [DOI: 10.1016/j.annemergmed.2011.06.309]
|
22
|
Garbarski D, Schaeffer NC, Dykema J. Are Interactional Behaviors Exhibited When the Self-Reported Health Question is Asked Associated with Health Status? Soc Sci Res 2011; 40:1025-1036. [PMID: 21927518] [PMCID: PMC3173777] [DOI: 10.1016/j.ssresearch.2011.04.002]
Abstract
The self-reported health question summarizes information about health status across several domains of health and is widely used to measure health because it predicts mortality well. We examine whether interactional behaviors produced by respondents and interviewers during the self-reported health question-answer sequence reflect complexities in the respondent's health history. We observed more problematic interactional behaviors during question-answer sequences in which respondents reported worse health. Furthermore, these behaviors were more likely to occur when there were inconsistencies in the respondent's health history, even after controlling for the respondent's answer to the self-reported health question, cognitive ability, and sociodemographic characteristics. We also found that among respondents who reported "excellent" health, and to a lesser extent among those who reported their health was "very good," problematic interactional behaviors were associated with health inconsistencies. Overall, we find evidence that the interactional behaviors exhibited during the question-answer sequence are associated with respondents' health status.
Affiliation(s)
- Dana Garbarski
- Department of Sociology, University of Wisconsin-Madison, 8128 William H. Sewell Social Science Building, 1180 Observatory Drive, Madison, WI 53706
- Department of Population Health Sciences, University of Wisconsin-Madison, 707 WARF Building, 610 N. Walnut St., Madison, WI 53726
- Nora Cate Schaeffer
- Department of Sociology, University of Wisconsin-Madison, 8128 William H. Sewell Social Science Building, 1180 Observatory Drive, Madison, WI 53706
- University of Wisconsin Survey Center, 475 N. Charter St., Madison, WI 53706
- Jennifer Dykema
- University of Wisconsin Survey Center, 475 N. Charter St., Madison, WI 53706
|
23
|
Dykema J, Stevenson J, Day B, Sellers SL, Bonham VL. Effects of incentives and prenotification on response rates and costs in a national web survey of physicians. Eval Health Prof 2011; 34:434-47. [PMID: 21613242] [DOI: 10.1177/0163278711406113]
Abstract
Little is known about what strategies are cost-effective in increasing participation among physicians in surveys that are conducted exclusively via the web. To assess the effects of incentives and prenotification on response rates and costs, general internists (N = 3,550) were randomly selected from the American Medical Association (AMA) Masterfile and assigned to experimental groups that varied in the amount of a promised incentive (none, entry into a $200 lottery, $50, or $100) and prenotification (none, prenotification letter only, or prenotification letter containing a $2 preincentive). Results indicated that the response rates were highest in the groups promised $100 and $50, respectively. While the postal prenotification letter increased response rates, the inclusion of a small token $2 preincentive had no effect on participation. Further, unlike mail surveys of physicians, the $2 preincentive was not cost-effective. Among physicians, larger promised incentives of $50 or $100 are more effective than a nominal preincentive in increasing participation in a web-only survey. Consistent with prior research, there was little evidence of nonresponse bias among the experimental groups.
Affiliation(s)
- Jennifer Dykema
- University of Wisconsin Survey Center, University of Wisconsin-Madison, Madison, WI 53706, USA.
|
24
|
Dykema J, Stevenson J, Kniss C, Kvale K, González K, Cautley E. Use of Monetary and Nonmonetary Incentives to Increase Response Rates Among African Americans in the Wisconsin Pregnancy Risk Assessment Monitoring System. Matern Child Health J 2011; 16:785-91. [DOI: 10.1007/s10995-011-0780-2]
|