51
Hallford DJ, Takano K, Raes F, Austin DW. Psychometric Evaluation of an Episodic Future Thinking Variant of the Autobiographical Memory Test – Episodic Future Thinking-Test (EFT-T). European Journal of Psychological Assessment 2020. [DOI: 10.1027/1015-5759/a000536]
Abstract
Future-oriented variants of the Autobiographical Memory Test (AMT) are often used to assess the generation of specific episodic future thoughts; however, the underlying factor structure of items in this modified test has not yet been examined. Therefore, over two studies we examined the factor structure and validity of an episodic future thinking variant of the Autobiographical Memory Test (Episodic Future Thinking-Test; EFT-T). In Study 1, exploratory factor analysis (N = 466) showed a one-factor structure underlying responses to positive, negative, and concrete noun cue words on the EFT-T. In Study 2, confirmatory factor analysis with a different sample (N = 304) and different cue words showed a good fit for a single-factor structure. In both studies, good convergent validity was found, with scores on the EFT-T correlating with autobiographical memory specificity scores, and support was also found for divergent validity. Mixed support was found for associations with measures of mental imagery, and the implications for measurement are discussed. These studies provide the first evidence that the EFT-T unidimensionally assesses specificity in episodic future thinking across two cue word sets.
Affiliation(s)
- David J. Hallford, School of Psychology, Deakin University, Geelong, Victoria, Australia
- Keisuke Takano, Division of Clinical Psychology and Psychotherapy, Department of Psychology, Ludwig-Maximilians-University Munich, Germany
- Filip Raes, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- David W. Austin, School of Psychology, Deakin University, Geelong, Victoria, Australia
52
A little garbage in, lots of garbage out: Assessing the impact of careless responding in personality survey data. Behavior Research Methods 2020; 52:2489-2505. [DOI: 10.3758/s13428-020-01401-8] [PMID: 32462604]
Abstract
In self-report surveys, it is common that some individuals do not pay enough attention or invest enough effort to give valid responses. Our aim was to investigate the extent to which careless and insufficient effort responding contributes to the biasing of data. We performed analyses of dimensionality, internal structure, and data reliability of four personality scales (extroversion, conscientiousness, stability, and dispositional optimism) in two independent samples. In order to identify careless/insufficient effort (C/IE) respondents, we used a factor mixture model (FMM) designed to detect inconsistencies in responses to items with different semantic polarity. The FMM identified between 4.4% and 10% of C/IE cases, depending on the scale and the sample examined. In the complete samples, all the theoretical models obtained an unacceptable fit, forcing the rejection of the starting hypothesis and making additional wording factors necessary. In the clean samples, all the theoretical models fitted satisfactorily, and the wording factors practically disappeared. Trait estimates in the clean samples were between 4.5% and 11.8% more accurate than in the complete samples. These results show that a limited amount of C/IE data can lead to a drastic deterioration in the fit of the theoretical model, produce large amounts of spurious variance, raise serious doubts about the dimensionality and internal structure of the data, and reduce the reliability with which the trait scores of all those surveyed are estimated. Identifying and filtering C/IE responses is necessary to ensure the validity of research results.
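The factor mixture model used in this study requires specialized latent-variable software, but the idea it exploits, that careless respondents answer positively and negatively keyed items inconsistently, can be screened for with much simpler arithmetic. The sketch below is a hypothetical proxy rather than the FMM from the article: negatively keyed items are reverse-coded and each respondent's answers on matched positive/negative item pairs are correlated, so attentive respondents should score clearly positive. The column names, item pairing, and 1–5 response scale are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def polarity_consistency(df: pd.DataFrame, pos_items, neg_items,
                         scale_min: int = 1, scale_max: int = 5) -> pd.Series:
    """Within-person correlation between matched positively keyed and
    (reverse-coded) negatively keyed items. Values near zero or negative
    suggest inconsistent, possibly careless, responding."""
    assert len(pos_items) == len(neg_items), "items must be matched in pairs"
    pos = df[pos_items].to_numpy(dtype=float)
    neg = scale_min + scale_max - df[neg_items].to_numpy(dtype=float)  # reverse-code

    scores = []
    for p, n in zip(pos, neg):
        if np.std(p) == 0 or np.std(n) == 0:  # flat responding: correlation undefined
            scores.append(np.nan)
        else:
            scores.append(np.corrcoef(p, n)[0, 1])
    return pd.Series(scores, index=df.index, name="polarity_consistency")

# Hypothetical usage: flag the lowest-scoring respondents for review.
# consistency = polarity_consistency(data, ["e1", "e3", "e5"], ["e2", "e4", "e6"])
# flagged = data[consistency < 0]
```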
53
Hong M, Steedle JT, Cheng Y. Methods of Detecting Insufficient Effort Responding: Comparisons and Practical Recommendations. Educational and Psychological Measurement 2020; 80:312-345. [DOI: 10.1177/0013164419865316] [PMID: 32158024] [PMCID: PMC7047258]
Abstract
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, the literature offers little practical advice for applied researchers and psychometricians on evaluating multiple sources of IER evidence, including the best strategy or combination of strategies when preprocessing data. In this study, we demonstrate how the use of different IER detection methods may affect psychometric properties such as predictive validity and reliability. Moreover, we evaluate how different data cleansing procedures can detect different types of IER. We provide evidence via simulation studies and an applied analysis using ACT's Engage assessment as a motivating example. Based on the findings of the study, we provide recommendations and future research directions for those who suspect their data may contain responses reflecting careless, random, or biased responding.
Affiliation(s)
- Ying Cheng, University of Notre Dame, Notre Dame, IN, USA
54
Detecting computer-generated random responding in online questionnaires: An extension of Dupuis, Meier & Cuneo (2019) on dichotomous data. Personality and Individual Differences 2020. [DOI: 10.1016/j.paid.2020.109812]
55
Chen Y, Thissen D, Anand D, Chen LH, Liang H, Daughters SB. Evaluating Differential Item Functioning (DIF) of the Chinese Version of the Behavioral Activation for Depression Scale (C-BADS). European Journal of Psychological Assessment 2020. [DOI: 10.1027/1015-5759/a000525]
Abstract
Depression is prevalent in both China and Taiwan, and Behavioral Activation (BA), an Evidence-Based Treatment (EBT) for depression, is ideally suited for cross-cultural implementation. As a first step, the current study examined cross-cultural differences in the understanding of BA constructs by investigating item-level differences in functioning between the English and Chinese versions of the Behavioral Activation for Depression Scale (BADS and C-BADS; Kanter, Mulick, Busch, Berlin, & Martell, 2007; Li, Ding, Kanter, Zeng, & Yang, 2014). A total of 752 college students were recruited from China, Taiwan, and the United States. Factorial invariance-based Differential Item Functioning (DIF) analysis was used to study item-level differences in functioning for the BADS and C-BADS. DIF was observed in the majority of BADS items, with items in the avoidance and impairment factors showing the greatest DIF. The constructs of avoidance and impairment demonstrated less cross-cultural generalizability than the activation construct. Suggestions for the implementation of DIF analysis in future cross-cultural psychometric studies, and for further modification of the C-BADS as a clinical assessment tool in China and Taiwan, are discussed.
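The study itself uses factorial invariance-based DIF analysis, which requires a full multiple-group measurement model. A much cruder first-pass screen is the observed-score standardization (standardized mean difference) index, which compares conditional item means between two groups matched on a rest score. The sketch below illustrates that generic approach, not the method used in the article; the column names, the grouping variable, and the rest-score matching are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def standardized_mean_difference(item, rest_score, group, focal_label):
    """Generic observed-score DIF screen (standardization / SMD approach):
    within rest-score strata, compare focal- and reference-group item means
    and average the differences using focal-group weights."""
    d = pd.DataFrame({
        "item": np.asarray(item, dtype=float),
        "rest": np.asarray(rest_score),
        "focal": np.asarray(group) == focal_label,
    })
    n_focal = d["focal"].sum()
    smd = 0.0
    for _, stratum in d.groupby("rest"):
        focal = stratum.loc[stratum["focal"], "item"]
        ref = stratum.loc[~stratum["focal"], "item"]
        if focal.empty or ref.empty:
            continue  # a stratum without both groups carries no comparison
        smd += (focal.mean() - ref.mean()) * len(focal) / n_focal
    return smd

# Hypothetical usage for one item, matching groups on the rest score:
# rest = data[bads_items].sum(axis=1) - data["bads_07"]
# print(standardized_mean_difference(data["bads_07"], rest, data["country"], "China"))
```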
Affiliation(s)
- Yun Chen, Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, NC, USA
- David Thissen, Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, NC, USA
- Deepika Anand, Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, NC, USA
- Lung Hung Chen, Department of Positive Sport & Leisure Psychology, National Taiwan Sport University, Taiwan
- Hong Liang, Department of Political Science, Huazhong University of Science and Technology, Wuhan, PR China
- Stacey B. Daughters, Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, NC, USA
56
Detecting computer-generated random responding in questionnaire-based data: A comparison of seven indices. Behavior Research Methods 2019; 51:2228-2237. [DOI: 10.3758/s13428-018-1103-y] [PMID: 30091086]
Abstract
With the development of online data collection and instruments such as Amazon's Mechanical Turk (MTurk), the appearance of malicious software that generates responses to surveys in order to earn money represents a major issue, for both economic and scientific reasons. Indeed, even if paying one respondent to complete one questionnaire represents a very small cost, the multiplication of botnets providing invalid response sets may ultimately reduce study validity while increasing research costs. Several techniques have been proposed thus far to detect problematic human response sets, but little research has been undertaken to test the extent to which they actually detect nonhuman response sets. We therefore conducted an empirical comparison of these indices. Assuming that most botnet programs are based on random uniform distributions of responses, we present and compare seven indices for detecting nonhuman response sets. A sample of 1,967 human respondents was mixed with different percentages (from 5% to 50%) of simulated random response sets. Three of the seven indices (response coherence, Mahalanobis distance, and person-total correlation) appear to be the best estimators for detecting nonhuman response sets. Given that two of those indices, Mahalanobis distance and person-total correlation, are easily calculated, every researcher working with online questionnaires could use them to screen for the presence of such invalid data.
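Both of the easily calculated indices recommended here can be computed directly from a person-by-item response matrix. The sketch below is a minimal illustration under assumed column names and cut-offs (the article does not prescribe specific thresholds): Mahalanobis distance measures how far each response vector lies from the sample centroid given the item covariance structure, and the person-total correlation correlates each respondent's answers with the sample-wide item means, so random vectors tend toward low or negative values.

```python
import numpy as np
import pandas as pd

def mahalanobis_distance(responses: pd.DataFrame) -> pd.Series:
    """Distance of each response vector from the sample centroid, scaled by
    the (pseudo-)inverse of the item covariance matrix."""
    X = responses.to_numpy(dtype=float)
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return pd.Series(np.sqrt(d2), index=responses.index, name="mahalanobis")

def person_total_correlation(responses: pd.DataFrame) -> pd.Series:
    """Correlation between each respondent's item responses and the item means;
    low or negative values are typical of random response vectors."""
    item_means = responses.mean(axis=0).to_numpy()
    out = responses.apply(
        lambda row: np.corrcoef(row.to_numpy(dtype=float), item_means)[0, 1], axis=1)
    return out.rename("person_total_r")

# Hypothetical screening rule: flag the joint tails of both indices.
# md, ptc = mahalanobis_distance(items), person_total_correlation(items)
# suspects = items[(md > md.quantile(0.95)) | (ptc < 0)]
```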
57
Hong MR, Cheng Y. Clarifying the Effect of Test Speededness. Applied Psychological Measurement 2019; 43:611-623. [DOI: 10.1177/0146621618817783] [PMID: 31551639] [PMCID: PMC6745631]
Abstract
In the context of high-stakes tests, test takers who do not have enough time to complete a test rush toward the end and may engage in speeded behavior when tests do not penalize guessing. Using mathematical derivations and simulations, previous research showed that random guessing responses should attenuate interitem correlations and, therefore, decrease estimates of reliability. Meanwhile, using real data, other researchers showed that random guessing could in fact inflate reliability estimates. We provide analytical derivations of how speededness can affect the correlation between two dichotomous items in multiple ways, depending on the manifestation and prevalence of test speededness. Furthermore, we provide two simulation studies that evaluate the magnitude of the impact of test speededness on interitem correlations and Cronbach's alpha. We found that the impact of test speededness can vary between item pairs and that it depends on the manifestation of test speededness and item-level characteristics. Furthermore, speeded responses will, in general, attenuate or not affect reliability estimates, depending on the prevalence of such responses and the conceptual interpretation of speeded responses. Implications of the findings are discussed.
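The direction of this effect on coefficient alpha is easy to probe with a small simulation. The sketch below is purely illustrative, and every quantity in it (Rasch-type item parameters, sample size, a 20% speeded subgroup that guesses at random on the last ten items of an assumed 4-option test) is an assumption rather than a condition from the article; it simply compares alpha with and without the speeded records.

```python
import numpy as np

rng = np.random.default_rng(2020)

def cronbach_alpha(X: np.ndarray) -> float:
    """Coefficient alpha from an examinee-by-item score matrix."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

n_persons, n_items, n_speeded_items = 2000, 30, 10
theta = rng.normal(size=n_persons)
b = np.linspace(-2, 2, n_items)
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))  # Rasch-type response probabilities
responses = rng.binomial(1, p)

# 20% of examinees answer the last 10 items by random guessing (4 options assumed).
speeded = rng.random(n_persons) < 0.20
guesses = rng.binomial(1, 0.25, size=(speeded.sum(), n_speeded_items))
speeded_responses = responses.copy()
speeded_responses[speeded, -n_speeded_items:] = guesses

print("alpha, no speededness:", round(cronbach_alpha(responses), 3))
print("alpha, 20% speeded   :", round(cronbach_alpha(speeded_responses), 3))
```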
58
Abstract
Self-report data are common in psychological and survey research. Unfortunately, many of these samples are plagued with careless responses from unmotivated participants. The purpose of this study was to propose and evaluate a robust estimation method to detect careless or unmotivated responders while leveraging item response theory (IRT) person-fit statistics. First, we outlined a general framework for robust estimation specific to IRT models. Subsequently, we conducted a simulation study covering multiple conditions in order to evaluate the performance of the proposed method. Ultimately, we showed that robust maximum marginal likelihood (RMML) estimation significantly improves detection rates for careless responders and reduces bias in item parameters across conditions. Furthermore, we applied our method to a real data set to illustrate its utility. Our findings suggest that robust estimation coupled with person-fit statistics offers a powerful procedure for identifying careless respondents for further review and for providing more accurate item parameter estimates in the presence of careless responses.
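The RMML estimator itself is not reproduced here, but its two building blocks, a person-fit statistic and a weight that shrinks the influence of poorly fitting response vectors, can be sketched compactly. In the sketch below, the standardized log-likelihood statistic lz is computed for a 2PL model with item parameters treated as known, and a Huber-style weight is derived from it; the cutoff and the weight function are assumptions, not the ones used in the study.

```python
import numpy as np

def lz_statistic(u: np.ndarray, theta: float, a: np.ndarray, b: np.ndarray) -> float:
    """Standardized log-likelihood person-fit statistic (lz) for a 2PL model
    with known item parameters; large negative values indicate misfit."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    p = np.clip(p, 1e-6, 1 - 1e-6)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)

def huber_weight(lz: float, cutoff: float = -2.0) -> float:
    """Illustrative downweighting rule: full weight for well-fitting vectors,
    shrinking weight as lz falls below the cutoff (an assumed rule, not the
    weight function from the article)."""
    return 1.0 if lz >= cutoff else cutoff / lz

# Hypothetical usage with one response vector and fixed item parameters:
# w = huber_weight(lz_statistic(u=resp, theta=0.3, a=a_params, b=b_params))
```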
59
Differentiating conscientious from indiscriminate responders in existing NEO-Five Factor Inventory-3 data. Journal of Research in Personality 2019. [DOI: 10.1016/j.jrp.2019.05.009]
60
Beck MF, Albano AD, Smith WM. Person-Fit as an Index of Inattentive Responding: A Comparison of Methods Using Polytomous Survey Data. Applied Psychological Measurement 2019; 43:374-387. [DOI: 10.1177/0146621618798666] [PMID: 31235983] [PMCID: PMC6572906]
Abstract
Self-report measures are vulnerable to response biases that can degrade the accuracy of conclusions drawn from results. In low-stakes measures, inattentive or careless responding can be especially problematic. A variety of a priori and post hoc methods exist for detecting these aberrant response patterns. Previous research indicates that nonparametric person-fit statistics tend to be the most accurate post hoc method for detecting inattentive responding on measures with dichotomous outcomes. This study investigated the accuracy, and the impact on model fit, of parametric and nonparametric person-fit statistics in detecting inattentive responding with polytomous response scales. Receiver operating characteristic (ROC) analysis was used to determine the accuracy of each detection metric, and confirmatory factor analysis (CFA) fit indices were used to examine the impact of using person-fit statistics to identify inattentive respondents. ROC analysis showed that the nonparametric H^T statistic offered the largest area under the curve when predicting a proxy for inattentive responding. The CFA fit indices showed that the impact of using person-fit statistics largely depends on the purpose (and cutoff) for using them. Implications for using person-fit statistics to identify inattentive responders are discussed further.
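The ROC logic used to compare detection metrics is straightforward to reproduce once a proxy label for inattentive responding is available. The sketch below is a generic illustration on simulated five-point data, using a simple stand-in detector (agreement with item-level means) rather than the H^T statistic evaluated in the study; the sample sizes, response model, and contamination rate are all assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)

# Simulated proxy data (assumptions, not the study's data): 900 attentive
# respondents track item-level endorsement norms on 20 five-point items,
# while 100 inattentive respondents answer uniformly at random.
person_level = rng.normal(0.0, 0.6, size=(900, 1))
item_level = np.linspace(1.8, 4.2, 20)
attentive = np.clip(np.round(item_level + person_level +
                             rng.normal(0.0, 0.6, size=(900, 20))), 1, 5)
inattentive = rng.integers(1, 6, size=(100, 20)).astype(float)
X = np.vstack([attentive, inattentive])
is_inattentive = np.repeat([0, 1], [900, 100])

# Stand-in detector: negative person-total correlation (higher = more suspicious).
# Any per-person statistic, such as a person-fit index, could be plugged in here.
item_means = X.mean(axis=0)
detector = np.array([-np.corrcoef(row, item_means)[0, 1] for row in X])

fpr, tpr, thresholds = roc_curve(is_inattentive, detector)  # full curve, e.g., for plotting
print(f"AUC = {roc_auc_score(is_inattentive, detector):.3f}")
```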
Affiliation(s)
- Mark F. Beck, Educational Psychology, University of Nebraska–Lincoln, 114 Teachers College Hall, Lincoln, NE 68588, USA
61
Conijn JM, Franz G, Emons WHM, de Beurs E, Carlier IVE. The Assessment and Impact of Careless Responding in Routine Outcome Monitoring within Mental Health Care. Multivariate Behavioral Research 2019; 54:593-611. [DOI: 10.1080/00273171.2018.1563520] [PMID: 31001995]
Abstract
Careless responding by mental health patients on self-report assessments is rarely investigated in routine care, despite the potential for serious consequences such as faulty clinical decisions. We investigated which validity indices are most appropriate for detecting careless responding in routine outcome monitoring (ROM) in mental health care. First, we reviewed indices proposed in previous research for their suitability in ROM. Next, we evaluated six selected indices using data on the Brief Symptom Inventory and the Mood and Anxiety Symptom Questionnaire from 3,483 outpatients. Simulations showed that for typical ROM scales the Lmax index, Mahalanobis distance, and inter-item standard deviation may be too strongly confounded with the latent trait value to compare careless responding across patients with different symptom severity. Application of two different classification methods to the validity indices did not converge on similar prevalence estimates of careless responding. Finally, results suggest that careless responding does not have a substantial biasing effect on scale-score statistics. We recommend the l_z^p person-fit index for screening for random careless responding in large ROM data sets. However, additional research should further investigate methods for detecting repetitive responding in typical ROM data and assess whether there are specific circumstances in which simpler validity statistics or direct screening methods perform similarly to the l_z^p index.
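Two of the simpler indices discussed here can be computed without any model fitting. The sketch below is a minimal illustration under assumed column names: the long-string index (Lmax) records the longest run of identical consecutive responses, and the inter-item standard deviation summarizes how much a respondent's answers vary across items. As the abstract notes, both can be confounded with symptom severity on typical ROM scales, so they are better treated as screening aids than as automatic exclusion rules.

```python
import numpy as np
import pandas as pd

def longest_string(row: np.ndarray) -> int:
    """Lmax: length of the longest run of identical consecutive responses."""
    run, best = 1, 1
    for prev, cur in zip(row[:-1], row[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def careless_indices(responses: pd.DataFrame) -> pd.DataFrame:
    """Long-string index (Lmax) and inter-item standard deviation (ISD),
    reported per respondent for inspection rather than automatic cut-offs."""
    X = responses.to_numpy(dtype=float)
    return pd.DataFrame({
        "lmax": [longest_string(r) for r in X],
        "isd": X.std(axis=1, ddof=1),
    }, index=responses.index)

# Hypothetical usage on a Brief Symptom Inventory item block:
# idx = careless_indices(bsi_items)
# review = bsi_items[idx["lmax"] >= bsi_items.shape[1]]  # straight-lined the scale
```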
Affiliation(s)
- Judith M. Conijn, Research Institute of Child Development and Education, University of Amsterdam, Amsterdam, the Netherlands
- Gunhild Franz, Institute of Psychology, Leiden University, Leiden, the Netherlands
- Wilco H. M. Emons, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, the Netherlands
- Edwin de Beurs, Institute of Psychology, Leiden University, Leiden, the Netherlands
- Ingrid V. E. Carlier, Department of Psychiatry, Leiden University Medical Centre, Leiden, the Netherlands
62
Noncompliant responding: Comparing exclusion criteria in MTurk personality research to improve data quality. Personality and Individual Differences 2019. [DOI: 10.1016/j.paid.2019.02.015]
63
Chen Y, Daughters SB, Thissen D, Salcedo S, Anand D, Chen LH, Liang H, Niu X, Su L. Cultural Differences in Environmental Reward Across Individuals in China, Taiwan, and the United States. Journal of Psychopathology and Behavioral Assessment 2019. [DOI: 10.1007/s10862-019-09743-0]
64
Conijn JM, Smits N, Hartman EE. Determining at What Age Children Provide Sound Self-Reports: An Illustration of the Validity-Index Approach. Assessment 2019; 27:1604-1618. [DOI: 10.1177/1073191119832655] [PMID: 30829047]
Abstract
In psychological assessment of children, it is pivotal to establish from what age onward self-reports can complement or replace informant reports. We introduce a psychometric approach to estimate the minimum age at which a child can produce self-report data of similar quality to informant data. The approach makes use of statistical validity indicators such as person-fit and long-string indices, and it can be readily applied to data commonly collected in psychometric studies of child measures. We evaluate and illustrate the approach using self-report and informant-report data on the PedsQL, a pediatric health-related quality-of-life measure, from 651 child-mother pairs. To evaluate the approach, we tested various hypotheses about the validity of the self-report data, using the G_n^p person-fit index as the validity indicator and the mother informant data as a benchmark for validity. Results showed that G_n^p discriminated between self-reports of younger and older children, between self-reports of children who completed the PedsQL alone or with a parent, and between self-reports and informant reports. We conclude that the validity-index approach has good potential for future applications. Future research should further evaluate the approach for different types of questionnaires (e.g., personality inventories) and using different validity indices (e.g., response-bias indices).
Affiliation(s)
- Niels Smits, University of Amsterdam, Amsterdam, Netherlands
65
Prestele E, Altstötter-Gleich C. Testgüte einer deutschen Version des Mehrdimensionalen Perfektionismus Kognitions-Inventars (MPCI-G) [Psychometric properties of a German version of the Multidimensional Perfectionism Cognitions Inventory (MPCI-G)]. Diagnostica 2019. [DOI: 10.1026/0012-1924/a000211]
Abstract
This paper describes the development of a German-language version of the Multidimensional Perfectionism Cognitions Inventory (MPCI-G [G = German]). A first study examined the factorial validity of the MPCI-G. Based on the results of Study 1, the MPCI-G was revised (MPCI-G-R). Study 2 examined the reliability, factorial validity, and construct validity of the MPCI-G-R. Results from confirmatory factor analyses, correlation analyses, and multiple regression analyses support the reliability, factorial validity, and construct validity of the three (correlated) dimensions of perfectionistic cognitions: personal standards cognitions (PSK), concern over mistakes cognitions (CMK), and pursuit of perfection cognitions (PPK). Among other findings, the three dimensions showed differential associations with dispositional perfectionism (perfectionistic strivings and concerns), affect (bad mood and restlessness), depressiveness, and goal setting for an upcoming examination period. A reliable and valid multidimensional assessment of perfectionistic cognitions that differentiates between more positive (PSK) and negative (CMK and PPK) dimensions is a valuable addition to research on dispositional perfectionism and can further the understanding of how dimensions of dispositional perfectionism relate to mental and physical well-being.
66
Examination of the validity of instructed response items in identifying careless respondents. Personality and Individual Differences 2018. [DOI: 10.1016/j.paid.2018.03.022]
67
Sun X, So SHW, Chiu CD, Chan RCK, Leung PWL. Paranoia and anxiety: A cluster analysis in a non-clinical sample and the relationship with worry processes. Schizophrenia Research 2018; 197:144-149. [DOI: 10.1016/j.schres.2018.01.024] [PMID: 29398206]
Abstract
BACKGROUND: Worry processes are implicated in both paranoia and anxiety. However, clinical studies have focused on patients with co-occurring paranoia and anxiety. As both paranoia and anxiety are distributed across clinical and non-clinical groups, an investigation of worry processes among non-clinical individuals allows the specific worry mechanisms in paranoia and in anxiety to be delineated. AIMS: To identify clusters of non-clinical individuals who report varied levels of paranoia and anxiety, and to compare worry processes across clusters. METHOD: An online survey, consisting of self-report questionnaires on generalized anxiety, paranoia, and worry processes, was completed by 2,796 undergraduate students. A multiple-step validity check procedure resulted in a subsample of 2,291 students, on which cluster analyses and multivariate analyses of variance were conducted. RESULTS: Four clusters of individuals were identified: (1) high paranoia/moderate anxiety, (2) average paranoia/high anxiety, (3) average paranoia/average anxiety, and (4) low paranoia/low anxiety. A unique cluster of individuals with high paranoia but low/average anxiety was not found. Cluster 1 reported a significantly higher intensity of day-to-day worries, a higher level of meta-worry, and more extreme meta-cognitive beliefs about worry than the other clusters. CONCLUSIONS: Individuals with high paranoia tended to report anxiety as well, but not vice versa. Our findings support a hierarchical structure of anxiety and paranoia. All worry processes were more pronounced in individuals with both paranoia and anxiety than in those with anxiety alone.
Affiliation(s)
- Xiaoqi Sun, Department of Psychology, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
- Suzanne Ho-Wai So, Department of Psychology, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
- Chui-De Chiu, Department of Psychology, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
- Raymond Chor-Kiu Chan, Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Patrick Wing-Leung Leung, Department of Psychology, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
68
Marjanovic Z, Bajkov L, MacDonald J. The Conscientious Responders Scale Helps Researchers Verify the Integrity of Personality Questionnaire Data. Psychological Reports 2018; 122:1529-1549. [DOI: 10.1177/0033294118783917] [PMID: 29914343]
Abstract
The Conscientious Responders Scale is a five-item embeddable validity scale that differentiates between conscientious and indiscriminate responding (CR and IR, respectively) in personality-questionnaire data. This investigation presents further evidence of its validity and generalizability across two experiments. Study 1 tests its sensitivity to questionnaire length, a known cause of IR, and tries to provoke IR by manipulating psychological reactance. As expected, short questionnaires produced higher Conscientious Responders Scale scores than long questionnaires, and Conscientious Responders Scale scores were unaffected by the reactance manipulations. Study 2 tests the concern that the Conscientious Responders Scale's unusual item content could irritate and baffle responders, ironically increasing rates of IR. We administered two nearly identical questionnaires: one with an embedded Conscientious Responders Scale and one without. Psychometric comparisons revealed no differences across the questionnaires' means, variances, interitem response consistencies, and Cronbach's alphas. In sum, the Conscientious Responders Scale is highly sensitive to questionnaire length (a known correlate of IR) and can be embedded harmlessly in questionnaires without provoking IR or changing the psychometrics of other measures.
69
DeSimone JA, DeSimone AJ, Harms PD, Wood D. The Differential Impacts of Two Forms of Insufficient Effort Responding. Applied Psychology: An International Review 2017. [DOI: 10.1111/apps.12117]
70
Conijn JM, van der Ark LA, Spinhoven P. Satisficing in Mental Health Care Patients: The Effect of Cognitive Symptoms on Self-Report Data Quality. Assessment 2017; 27:178-193. [DOI: 10.1177/1073191117714557] [PMID: 28703008] [PMCID: PMC6906541]
Abstract
Respondents may use satisficing (i.e., nonoptimal) strategies when responding to self-report questionnaires. These satisficing strategies become more likely with decreasing motivation and/or cognitive ability (Krosnick, 1991). Considering that cognitive deficits are characteristic of depressive and anxiety disorders, depressed and anxious patients may be prone to satisficing. Using data from the Netherlands Study of Depression and Anxiety (N = 2,945), we studied the relationship between depression and anxiety, cognitive symptoms, and satisficing strategies on the NEO Five-Factor Inventory. Results showed that respondents with either an anxiety disorder or a comorbid anxiety and depression disorder used satisficing strategies substantially more often than healthy respondents. Cognitive symptom severity partly mediated the effect of anxiety disorder and comorbid anxiety disorder on satisficing. The results suggest that depressed and anxious patients produce relatively low-quality self-report data, partly due to cognitive symptoms. Future research should investigate the degree of satisficing across different mental health care assessment contexts.
Affiliation(s)
- Judith M. Conijn, University of Amsterdam, Amsterdam, Netherlands; Leiden University, Leiden, Netherlands
- Philip Spinhoven, Leiden University, Leiden, Netherlands; Leiden University Medical Center, Leiden, Netherlands