1
Scheeren AM, Olde Dubbelink L, Lever AG, Geurts HM. Two validation studies of a performance validity test for autistic adults. Appl Neuropsychol Adult 2024:1-13. [PMID: 38279835] [DOI: 10.1080/23279095.2024.2305206]
Abstract
In two studies we examined the potential of a simple emotion recognition task, the Morel Emotional Numbing Test (MENT), as a performance validity test (PVT) for autism-related cognitive difficulties in adulthood. The aim of a PVT is to indicate non-credible performance, which can aid the interpretation of psychological assessments. There are currently no validated PVTs for autism-related difficulties in adulthood. In Study 1, non-autistic university students (aged 18-46 years) were instructed to simulate being autistic during a psychological assessment (simulation condition; n = 26). These students made more errors on the MENT than those instructed to do their best (control condition; n = 26). In Study 2, we tested how well autistic adults performed on the MENT. We found that clinically diagnosed autistic adults and non-autistic adults (both n = 25; 27-57 years; IQ > 80) performed equally well on the MENT. Moreover, autistic adults made significantly fewer errors than the instructed simulators in Study 1. The MENT reached a specificity of ≥98% (identifying 100% of non-simulators as non-simulators in Study 1 and 98% in Study 2) and a sensitivity of 96% (identifying 96% of simulators as simulators). Together, these findings provide the first empirical evidence for the validity of the MENT as a potential PVT for autism-related cognitive difficulties.
Affiliation(s)
- Anke M Scheeren
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Linda Olde Dubbelink
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Anne Geeke Lever
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Hilde M Geurts
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Dr. Leo Kannerhuis, autism clinic, Amsterdam, the Netherlands
2
Orrù G, De Marchi B, Sartori G, Gemignani A, Scarpazza C, Monaro M, Mazza C, Roma P. Machine learning item selection for short scale construction: A proof-of-concept using the SIMS. Clin Neuropsychol 2023; 37:1371-1388. [PMID: 36017966] [DOI: 10.1080/13854046.2022.2114548]
Abstract
Objective: This proof-of-concept paper provides evidence to support machine learning (ML) as a valid alternative to traditional psychometric techniques in the development of short forms of longer parent psychological tests. ML comprises a variety of feature selection techniques that can be efficiently applied to identify the set of items that best replicates the characteristics of the original test. Methods: In the present study, we integrated a dataset of 329 participants from published and unpublished datasets used in previous research on the Structured Inventory of Malingered Symptomatology (SIMS) to develop a short version of the scale. The SIMS is a multi-axial self-report questionnaire and a highly efficient psychometric measure of symptom validity, which is frequently applied in forensic settings. Results: State-of-the-art ML item selection techniques achieved a 72% reduction in length while capturing 92% of the variance of the original SIMS. The new SIMS short form now consists of 21 items. Conclusions: The results suggest that the proposed ML-based item selection technique represents a promising alternative to standard psychometric correlation-based methods (i.e., item selection, item response theory), especially when selection techniques (e.g., wrapper) are employed that evaluate global, rather than local, item value.
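The wrapper-style selection this abstract describes can be illustrated with a short sketch. Everything below is an assumption for illustration (synthetic single-factor binary item data, a greedy forward search, 75 items reduced to 21); it is not the authors' pipeline or the SIMS item pool:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a long symptom inventory: 329 respondents,
# 75 binary items driven by a single latent trait (all assumptions).
n_subjects, n_items = 329, 75
trait = rng.normal(size=n_subjects)
loadings = rng.uniform(0.5, 1.5, size=n_items)
noise = rng.normal(size=(n_subjects, n_items))
responses = (trait[:, None] * loadings + noise > 0).astype(float)
full_score = responses.sum(axis=1)  # total score of the long form

def greedy_select(X, target, k):
    """Wrapper-style forward selection: at each step, add the item whose
    inclusion maximizes the correlation between the short-form sum and
    the full-scale total score (a global, scale-level criterion)."""
    selected, remaining = [], list(range(X.shape[1]))
    best_r = -np.inf
    for _ in range(k):
        best_item, best_r = None, -np.inf
        for j in remaining:
            r = np.corrcoef(X[:, selected + [j]].sum(axis=1), target)[0, 1]
            if r > best_r:
                best_item, best_r = j, r
        selected.append(best_item)
        remaining.remove(best_item)
    return selected, best_r

# Keep 21 of 75 items (a 72% reduction, mirroring the short-form length).
items, r_short = greedy_select(responses, full_score, k=21)
print(f"kept {len(items)} items; r(short, full) = {r_short:.3f}")
```

The point of a wrapper is that each candidate short form is scored by the quality of the whole resulting scale (here, the correlation of the short-form sum with the full-scale total), rather than by local, per-item statistics such as item-total correlations.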
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Barbara De Marchi
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
- Cristina Mazza
- Department of Neuroscience, Imaging and Clinical Sciences, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
- Paolo Roma
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
3
Chokka P, Bender A, Brennan S, Ahmed G, Corbière M, Dozois DJA, Habert J, Harrison J, Katzman MA, McIntyre RS, Liu YS, Nieuwenhuijsen K, Dewa CS. Practical pathway for the management of depression in the workplace: a Canadian perspective. Front Psychiatry 2023; 14:1207653. [PMID: 37732077] [PMCID: PMC10508062] [DOI: 10.3389/fpsyt.2023.1207653]
Abstract
Major depressive disorder (MDD) and other mental health issues pose a substantial burden on the workforce. Approximately half a million Canadians will not be at work in any given week because of a mental health disorder, and more than twice that number will work at a reduced level of productivity (presenteeism). Although it is important to determine whether work plays a role in a mental health condition, patients should, at initial presentation, be diagnosed and treated per appropriate clinical guidelines. However, it is also important for patient care to determine the various causes or triggers, including work-related factors. Clearly identifying the stressors associated with the mental health disorder can help clinicians assess functional limitations, develop an appropriate care plan, and interact more effectively with workers' compensation and disability programs, as well as employers. There is currently no widely accepted tool to definitively identify MDD as work-related, but the presence of certain patient and work characteristics may help. This paper reviews the evidence specific to depression in the workplace and provides practical tips to help clinicians identify and treat work-related MDD, as well as navigate disability issues.
Affiliation(s)
- Pratap Chokka
- Department of Psychiatry, University of Alberta, Grey Nuns Hospital, Edmonton, AB, Canada
- Ash Bender
- Work, Stress and Health Program, The Centre for Addiction and Mental Health, Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Stefan Brennan
- Department of Psychiatry, University of Saskatchewan, Royal University Hospital, Saskatoon, SK, Canada
- Ghalib Ahmed
- Department of Family Medicine and Psychiatry, University of Alberta, Edmonton, AB, Canada
- Marc Corbière
- Department of Education, Career Counselling, Université du Québec à Montréal, Centre de Recherche de l’Institut Universitaire en Santé Mentale de Montréal, Montréal, QC, Canada
- David J. A. Dozois
- Department of Psychology, University of Western Ontario, London, ON, Canada
- Jeff Habert
- Department of Family and Community Medicine, University of Toronto, Toronto, ON, Canada
- John Harrison
- Metis Cognition Ltd., Kilmington, United Kingdom; Centre for Affective Disorders, Institute of Psychiatry, Psychology and Neuroscience, King’s College, London, United Kingdom; Alzheimercentrum, AUmc, Amsterdam, Netherlands
- Martin A. Katzman
- START Clinic for the Mood and Anxiety Disorders, Toronto, ON, Canada; Department of Psychiatry, Northern Ontario School of Medicine, and Department of Psychology, Lakehead University, Thunder Bay, ON, Canada
- Roger S. McIntyre
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Yang S. Liu
- Department of Psychiatry, University of Alberta, Edmonton, AB, Canada
- Karen Nieuwenhuijsen
- Department of Public and Occupational Health, Coronel Institute of Occupational Health, Amsterdam Public Health Research Institute, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- Carolyn S. Dewa
- Department of Psychiatry and Behavioural Sciences, University of California, Davis, Davis, CA, United States
4
Orrù G, Ordali E, Monaro M, Scarpazza C, Conversano C, Pietrini P, Gemignani A, Sartori G. Reconstructing individual responses to direct questions: a new method for reconstructing malingered responses. Front Psychol 2023; 14:1093854. [PMID: 37397336] [PMCID: PMC10311065] [DOI: 10.3389/fpsyg.2023.1093854]
Abstract
Introduction: The false consensus effect consists of an overestimation of how common one's own opinion is among other people. This research demonstrates that an individual's endorsement of a question may be predicted from their estimate of peers' responses to the same question. Moreover, we aim to demonstrate how this prediction can be used to reconstruct the individual's response to a single item as well as the overall response to all of the items, making the technique suitable and effective for malingering detection. Method: We validated the procedure of reconstructing individual responses from peer estimation in two separate studies, one addressing anxiety-related questions and the other the Dark Triad. The questionnaires, adapted to our aims, were administered to the groups of participants, for a total of 187 subjects across both studies. Machine learning models were used to estimate the results. Results: According to the results, individual responses to a single question requiring a "yes" or "no" response are predicted with 70-80% accuracy. The overall participant-predicted score on all questions (total test score) correlates 0.7-0.77 with actual results. Discussion: The application of the false consensus effect format is a promising procedure for reconstructing truthful responses in forensic settings when respondents are highly likely to alter their true (genuine) responses and true responses to the tests are missing.
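The reconstruction idea, inferring a respondent's own answer from their estimate of how peers would answer, can be sketched in a few lines. The data and the simple majority decision rule below are illustrative assumptions, not the authors' machine learning models or materials:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative false-consensus data (assumed, not the study's): people who
# would personally answer "yes" to an item tend to overestimate how many
# peers would answer "yes", and "no"-responders underestimate it.
n_respondents = 187
true_answer = rng.integers(0, 2, size=n_respondents)        # 1 = "yes"
peer_estimate = np.clip(                                    # estimate in %
    rng.normal(loc=np.where(true_answer == 1, 60.0, 40.0), scale=12.0),
    0.0, 100.0,
)

# Reconstruction rule: predict "yes" whenever the respondent believes a
# majority of peers would answer "yes".
predicted = (peer_estimate > 50.0).astype(int)
accuracy = float((predicted == true_answer).mean())
print(f"reconstruction accuracy: {accuracy:.2f}")
```

The key property exploited here is that the peer estimate is an indirect question: a respondent motivated to fake the direct answer has less reason (and less ability) to fake a statistic about other people, yet that statistic still carries information about their own true answer.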
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
- Ciro Conversano
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
5
Tornero-Costa R, Martinez-Millana A, Azzopardi-Muscat N, Lazeri L, Traver V, Novillo-Ortiz D. Methodological and Quality Flaws in the Use of Artificial Intelligence in Mental Health Research: Systematic Review. JMIR Ment Health 2023; 10:e42045. [PMID: 36729567] [PMCID: PMC9936371] [DOI: 10.2196/42045]
Abstract
BACKGROUND Artificial intelligence (AI) is giving rise to a revolution in medicine and health care. Mental health conditions are highly prevalent in many countries, and the COVID-19 pandemic has increased the risk of further erosion of mental well-being in the population. It is therefore relevant to assess the current status of the application of AI in mental health research to identify trends, gaps, opportunities, and challenges. OBJECTIVE This study aims to perform a systematic overview of AI applications in mental health in terms of methodologies, data, outcomes, performance, and quality. METHODS A systematic search of the PubMed, Scopus, IEEE Xplore, and Cochrane databases was conducted to collect records of use cases of AI for mental health disorder studies from January 2016 to November 2021. Records were screened for eligibility if they were a practical implementation of AI in clinical trials involving mental health conditions. Records of AI study cases were evaluated and categorized by the International Classification of Diseases 11th Revision (ICD-11). Data related to trial settings, collection methodology, features, outcomes, and model development and evaluation were extracted following the CHARMS (Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies) guideline. Further, an evaluation of risk of bias is provided. RESULTS A total of 429 nonduplicated records were retrieved from the databases and 129 were included for full assessment, 18 of which were added manually. The distribution of AI applications in mental health was found to be unbalanced across ICD-11 mental health categories. Predominant categories were Depressive disorders (n=70) and Schizophrenia or other primary psychotic disorders (n=26). Most interventions were based on randomized controlled trials (n=62), followed by prospective cohorts (n=24) among observational studies. AI was typically applied to evaluate quality of treatments (n=44) or stratify patients into subgroups and clusters (n=31). Models usually applied a combination of questionnaires and scales to assess symptom severity, using electronic health records (n=49) as well as medical images (n=33). Quality assessment revealed important flaws in the process of AI application and data preprocessing pipelines. One-third of the studies (n=56) did not report any preprocessing or data preparation. One-fifth of the models were developed by comparing several methods (n=35) without assessing their suitability in advance, and a small proportion reported external validation (n=21). Only 1 paper reported a second assessment of a previous AI model. Risk of bias and transparent reporting yielded low scores due to poor reporting of the strategy for adjusting hyperparameters and coefficients and of the explainability of the models. International collaboration was anecdotal (n=17), and data and developed models mostly remained private (n=126). CONCLUSIONS These significant shortcomings, alongside the lack of information needed to ensure reproducibility and transparency, are indicative of the challenges that AI in mental health needs to face before contributing to a solid base for knowledge generation and serving as a support tool in mental health management.
Affiliation(s)
- Roberto Tornero-Costa
- Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas, Universitat Politècnica de València, Valencia, Spain
- Antonio Martinez-Millana
- Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas, Universitat Politècnica de València, Valencia, Spain
- Natasha Azzopardi-Muscat
- Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
- Ledia Lazeri
- Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
- Vicente Traver
- Instituto Universitario de Investigación de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas, Universitat Politècnica de València, Valencia, Spain
- David Novillo-Ortiz
- Division of Country Health Policies and Systems, World Health Organization, Regional Office for Europe, Copenhagen, Denmark
6
Bosso T, Vischia F, Keller R, Vai D, Imperiale D, Vercelli A. A case report and literature review of cognitive malingering and psychopathology. Front Psychiatry 2022; 13:981475. [PMID: 36311526] [PMCID: PMC9613951] [DOI: 10.3389/fpsyt.2022.981475]
Abstract
Malingering of cognitive difficulties constitutes a major issue in psychiatric forensic settings. Here, we present a selective literature review on cognitive malingering, psychopathology, and their possible connections. Furthermore, we report a single case study of a 60-year-old man with a long and ongoing judicial history who exhibits a suspicious multi-domain neurocognitive disorder with significant reduction of autonomy in daily living, alongside a longtime history of depressive symptoms. Building on this, we suggest the importance of evaluating malingering through both psychiatric and neuropsychological assessment tools. More specifically, the use of Performance Validity Tests (PVTs) - commonly but not quite correctly considered tests of "malingering" - alongside the collection of clinical history and the use of routine psychometric testing appears crucial in order to detect discrepancies between patients' self-reported symptoms, embedded validity indicators, and psychometric results.
Affiliation(s)
- Tea Bosso
- Department of Psychology, University of Turin, Turin, Italy
- Flavio Vischia
- Cognitive Disorders Diagnosis and Treatment Centre, North-West Unit Amedeo di Savoia Hospital, ASL Città di Torino, Turin, Italy
- Roberto Keller
- Mental Health Department North-West Unit, Local Health Unit, ASL Città di Torino, Turin, Italy
- Daniela Vai
- Cognitive Disorders Diagnosis and Treatment Centre, North-West Unit Amedeo di Savoia Hospital, ASL Città di Torino, Turin, Italy
- Daniele Imperiale
- Cognitive Disorders Diagnosis and Treatment Centre, North-West Unit Amedeo di Savoia Hospital, ASL Città di Torino, Turin, Italy
- Alessandro Vercelli
- Department of Neuroscience "Rita Levi Montalcini", University of Turin, Turin, Italy
7
Bosi J, Minassian L, Ales F, Akca AYE, Winters C, Viglione DJ, Zennaro A. The sensitivity of the IOP-29 and IOP-M to coached feigning of depression and mTBI: An online simulation study in a community sample from the United Kingdom. Appl Neuropsychol Adult 2022:1-13. [PMID: 36027614] [DOI: 10.1080/23279095.2022.2115910]
Abstract
Assessing the credibility of symptoms is critical to neuropsychological assessment in both clinical and forensic settings. To this end, the Inventory of Problems-29 (IOP-29) and its recently added memory module (Inventory of Problems-Memory; IOP-M) appear to be particularly useful, as they provide a rapid and cost-effective measure of both symptom and performance validity. While numerous studies have already supported the effectiveness of the IOP-29, research on its newly developed module, the IOP-M, is much sparser. To address this gap, we conducted a simulation study with a community sample (N = 307) from the United Kingdom. Participants were asked to either (a) respond honestly or (b) pretend to suffer from mTBI or (c) pretend to suffer from depression. Within each feigning group, half of the participants received a description of the symptoms of the disorder to be feigned, and the other half received both a description of the symptoms of the disorder to be feigned and a warning not to over-exaggerate their responses or their presentation would not be credible. Overall, the results confirmed the effectiveness of the two IOP components, both individually and in combination.
Affiliation(s)
- Jessica Bosi
- Department of Psychology, University of Surrey, Guildford, UK
- Laure Minassian
- Department of Psychology, University of Surrey, Guildford, UK
- Francesca Ales
- Department of Psychology, University of Turin, Turin, Italy
- Christina Winters
- Tilburg Institute for Law, Technology, and Society (TLS), Tilburg University, Tilburg, The Netherlands
8
Analysis of malingered psychological symptoms in a clinical sample for early detection in initial interviews. Eur Arch Psychiatry Clin Neurosci 2022; 273:427-438. [PMID: 35587278] [PMCID: PMC10070281] [DOI: 10.1007/s00406-022-01422-8]
Abstract
Malingering consists of the production of false physical or psychological symptoms motivated by external incentives; it typically involves conditions that lack a clear organic origin or laboratory tests for their diagnosis, as is the case with mixed anxiety-depressive disorder and fibromyalgia syndrome. The objective of this research was to compare the profiles of suspected simulators among patients with fibromyalgia and patients with mixed anxiety-depressive disorder, in order to obtain a common profile and facilitate detection in initial interviews. The research was carried out with 78 patients (42 patients with fibromyalgia and 36 patients with mixed anxiety-depressive disorder) who were administered the professional's structured clinical judgment, the Beck Depression Inventory, the State-Trait Anxiety Questionnaire, and the Structured Symptom Simulation Inventory. The main results show that the simulation classification proposed by the questionnaire coincides with expert judgment in 66.67-80% of cases, and that patients suspected of simulation in both groups present similar characteristics. The simulators thus present incongruous responses across the questionnaires, and high levels of trait anxiety, state anxiety, and depression predict the simulation of symptoms.
9
Giromini L, Pasqualini S, Corgiat Loia A, Pignolo C, Di Girolamo M, Zennaro A. A Survey of Practices and Beliefs of Italian Psychologists Regarding Malingering and Symptom Validity Assessment. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09452-2]
Abstract
A few years ago, an article describing the current status of Symptom Validity Assessment (SVA) practices and beliefs in European countries reported that there was little research activity in Italy (Merten et al., 2013). The same article also highlighted that Italian practitioners were less inclined to use Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in their assessments, compared with their colleagues from other major European countries. Considering that several articles on malingering and SVA have been published by Italian authors in recent years, we concluded that an update on the practices and beliefs of Italian professionals regarding malingering and SVA would be beneficial. Accordingly, from a larger survey that examined the general psychological assessment practices and beliefs of Italian professionals, we extracted a subset of items specifically related to malingering and SVA and analyzed the responses of a sample of Italian psychologists who have some experience with malingering-related assessments. Taken together, the results of our analyses indicated that even though our respondents tend to use SVTs and PVTs relatively often in their evaluations, at this time they likely place more trust in their own personal observations, impressions, and overall clinical judgment in their SVA practice. Additionally, our results indicated that Italian practitioners with some familiarity with malingering-related evaluations consider malingering to occur in about one-third of psychological assessments in which the evaluee might have an interest in overreporting.
10
Identifying Faked Responses in Questionnaires with Self-Attention-Based Autoencoders. Informatics 2022. [DOI: 10.3390/informatics9010023]
Abstract
Deception, also known as faking, is a critical issue when collecting data using questionnaires. As shown by previous studies, people have a tendency to fake their answers whenever they gain an advantage from doing so, e.g., when taking a test for a job application. Current methods identify a general attitude of faking but fail to identify faking patterns and the exact responses affected. Moreover, these strategies often require extensive data collection of honest responses and faking patterns related to the specific questionnaire use case, e.g., the position that people are applying to. In this work, we propose a self-attention-based autoencoder (SABA) model that can spot faked responses in a questionnaire relying solely on a set of honest answers that are not necessarily related to its final use case. We collect data relative to a popular personality test (the 10-item Big Five test) in three different use cases, i.e., to obtain: (i) child custody in court, (ii) a position as a salesperson, and (iii) a role in a humanitarian organization. The proposed model outperforms three competitive baselines (an autoencoder based only on feedforward layers, a distribution model, and a k-nearest-neighbor-based model) by a sizeable margin in terms of F1 score.
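The detection principle behind this family of models, train only on honest protocols and then flag protocols the autoencoder reconstructs poorly, can be sketched without self-attention. The sketch below substitutes a linear autoencoder (PCA via SVD) for SABA and uses synthetic Likert data; all sizes, distributions, and the faking pattern are assumptions for illustration, not the paper's data or architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Honest responses to a 10-item Likert test (1-5), driven by two latent
# traits; all numbers here are assumptions for illustration.
n_honest, n_fakers, n_items = 300, 60, 10
traits = rng.normal(size=(n_honest, 2))
W = rng.normal(scale=0.7, size=(2, n_items))
honest = np.clip(np.round(3.0 + traits @ W
                          + 0.3 * rng.normal(size=(n_honest, n_items))), 1, 5)

# Fakers push every item toward the desirable extreme, which breaks the
# covariance structure the honest data share.
fakers = np.clip(np.round(5.0 - np.abs(rng.normal(0.0, 0.8,
                          size=(n_fakers, n_items)))), 1, 5)

# "Train" on honest data only: a linear autoencoder is equivalent to PCA,
# so fit a 2-dimensional bottleneck with an SVD of the centred matrix.
mu = honest.mean(axis=0)
_, _, Vt = np.linalg.svd(honest - mu, full_matrices=False)
V = Vt[:2].T

def reconstruction_error(X):
    Z = (X - mu) @ V          # encode into the bottleneck
    X_hat = Z @ V.T + mu      # decode back to item space
    return ((X - X_hat) ** 2).mean(axis=1)

# Flag any protocol whose error exceeds the 95th percentile of the
# honest training errors.
threshold = np.quantile(reconstruction_error(honest), 0.95)
flag_rate = float((reconstruction_error(fakers) > threshold).mean())
print(f"flagged {flag_rate:.0%} of faked protocols")
```

Because the error is computed per item, the same machinery can also point to which responses deviate most, which is the "exact responses affected" property the abstract emphasizes; the self-attention layers in SABA serve to model richer inter-item dependencies than this linear bottleneck can.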
11
Symptom and Performance Validity Assessment in European Countries: an Update. Psychol Inj Law 2021; 15:116-127. [PMID: 34849185] [PMCID: PMC8612718] [DOI: 10.1007/s12207-021-09436-8]
Abstract
In 2013, a special issue of the Spanish journal Clínica y Salud published a review on symptom and performance validity assessment in European countries (Merten et al. in Clínica y Salud, 24(3), 129–138, 2013). At that time, developments were judged to be in their infancy in many countries, with major publication activities stemming from only four countries: Spain, The Netherlands, Great Britain, and Germany. As an introduction to a special issue of Psychological Injury and Law, this is an updated report of developments during the last 10 years. In that period of time, research activities have reached a level where it is difficult to follow all developments; some validity measures were newly developed, others were adapted for European languages, and validity assessment has found a much stronger place in real-world evaluation contexts. Next to an update from the four nations mentioned above, reports are now given from Austria, Italy, and Switzerland, too.
12
Monaro M, Bertomeu CB, Zecchinato F, Fietta V, Sartori G, De Rosario Martínez H. The detection of malingering in whiplash-related injuries: a targeted literature review of the available strategies. Int J Legal Med 2021; 135:2017-2032. [PMID: 33829284] [PMCID: PMC8354940] [DOI: 10.1007/s00414-021-02589-w]
Abstract
OBJECTIVE The present review is intended to provide an up-to-date overview of the strategies available to detect malingered symptoms following whiplash. Whiplash-associated disorders (WADs) represent the most common traffic injuries, having a major impact on economic and healthcare systems worldwide. The heterogeneous symptoms that may arise following whiplash injuries are difficult to objectify and are normally determined based on self-reported complaints. These elements, together with the litigation context, make fraudulent claims particularly likely. Crucially, at present, there is no clear evidence regarding which instruments can detect malingered WADs. METHODS We conducted a targeted literature review of the methodologies adopted to detect malingered WADs. Relevant studies published up to September 2020 were identified via the Medline (PubMed) and Scopus databases. RESULTS Twenty-two methodologies are included in the review, grouped into biomechanical techniques, clinical tools applied to forensic settings, and cognitive-based lie detection techniques. Strengths and weaknesses of each methodology are presented, and future directions are discussed. CONCLUSIONS Despite the variety of techniques that have been developed to identify malingering in forensic contexts, the present work highlights the current lack of rigorous methodologies for the assessment of WADs that take into account both the heterogeneous nature of the syndrome and the possibility of malingering. We conclude that it is pivotal to promote awareness of the presence of malingering in whiplash cases and highlight the need for novel, high-quality research in this field, with the potential to contribute to the development of standardised procedures for the evaluation of WADs and the detection of malingering.
Affiliation(s)
- Merylin Monaro
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Chema Baydal Bertomeu
- Instituto de Biomecánica de Valencia, Universitat Politècnica de Valencia, Ed. 9C. Camino de Vera s/n, 46022, Valencia, Spain
- Francesca Zecchinato
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Valentina Fietta
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Giuseppe Sartori
- Department of General Psychology, Università degli Studi di Padova, via Venezia 8, 35131, Padova, Italy
- Helios De Rosario Martínez
- Instituto de Biomecánica de Valencia, Universitat Politècnica de Valencia, Ed. 9C. Camino de Vera s/n, 46022, Valencia, Spain
- CIBER de Bioingeniería, Biomateriales Y Nanomedicina (CIBER-BBN), Zaragoza, Spain
13
Monaro M, De Rosario H, Baydal-Bertomeu JM, Bernal-Lafuente M, Masiero S, Macía-Calvo M, Cantele F, Sartori G. A model to differentiate WAD patients and people with abnormal pain behaviour based on biomechanical and self-reported tests. Int J Legal Med 2021; 135:1637-1646. [PMID: 33774707] [PMCID: PMC8205908] [DOI: 10.1007/s00414-021-02572-5]
Abstract
The prevalence of malingering among individuals presenting whiplash-related symptoms is significant and leads to huge economic losses due to fraudulent injury claims. Various strategies have been proposed to detect malingering and symptom exaggeration. However, most of them have not been consistently validated and tested to determine their accuracy in detecting feigned whiplash. This study merges two different approaches to detecting whiplash malingering (the mechanical approach and the qualitative analysis of the symptomatology) to obtain a malingering detection model based on a wider range of indices, both biomechanical and self-reported. A sample of 46 malingerers and 59 genuine clinical patients was tested using a kinematic test and a self-report questionnaire asking about the presence of rare and impossible symptoms. The collected measures were used to train and validate a linear discriminant analysis (LDA) classification model. Results showed that malingerers were discriminated from genuine clinical patients based on a greater proportion of rare vs. possible self-reported symptoms and on slower but more repeatable neck motions in the biomechanical test. The fivefold cross-validation of the LDA model yielded an area under the curve (AUC) of 0.84, with a sensitivity of 77.8% and a specificity of 84.7%.
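The classification pipeline summarized above (self-report and kinematic indices feeding a linear discriminant model evaluated with fivefold cross-validation) can be sketched as follows. The group sizes come from the abstract, but the feature names and all numeric values are synthetic illustrations, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two feature families used in the study:
# the proportion of rare vs. possible self-reported symptoms, and
# neck-motion speed/repeatability from the kinematic test.
n_genuine, n_malingerer = 59, 46
genuine = np.column_stack([
    rng.normal(0.05, 0.03, n_genuine),   # rare-symptom proportion (low)
    rng.normal(60.0, 8.0, n_genuine),    # neck-motion speed (deg/s)
    rng.normal(12.0, 3.0, n_genuine),    # motion variability (less repeatable)
])
malingerer = np.column_stack([
    rng.normal(0.25, 0.08, n_malingerer),  # more rare symptoms endorsed
    rng.normal(40.0, 8.0, n_malingerer),   # slower motions
    rng.normal(6.0, 2.0, n_malingerer),    # more repeatable motions
])
X = np.vstack([genuine, malingerer])
y = np.array([0] * n_genuine + [1] * n_malingerer)

# Fivefold cross-validated AUC of a linear discriminant model,
# mirroring the evaluation reported in the abstract.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=cv, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```

Because the synthetic groups are well separated, the sketch's AUC will be higher than the 0.84 reported for real data; the point is only the shape of the pipeline.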
Affiliation(s)
- Merylin Monaro
- Department of General Psychology, University of Padova, via Venezia 8, 35131, Padova, Italy
- Helios De Rosario
- Instituto de Biomecánica de Valencia, Universitat Politècnica de Valencia, Ed. 9C. Camino de Vera s/n, 46022, Valencia, Spain
- CIBER de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Zaragoza, Spain
- José María Baydal-Bertomeu
- Instituto de Biomecánica de Valencia, Universitat Politècnica de Valencia, Ed. 9C. Camino de Vera s/n, 46022, Valencia, Spain
- Marta Bernal-Lafuente
- MAZ, Academia General, Mutua Colaboradora con la Seguridad Social nº 11, Avenida Militar 74, 50015, Zaragoza, Spain
- Stefano Masiero
- Department of Neuroscience, Section of Rehabilitation, University of Padova, Via Nicolò Giustiniani 5, 35128, Padova, Italy
- Mónica Macía-Calvo
- MAZ, Academia General, Mutua Colaboradora con la Seguridad Social nº 11, Avenida Militar 74, 50015, Zaragoza, Spain
- Francesca Cantele
- Department of Neuroscience, Section of Rehabilitation, University of Padova, Via Nicolò Giustiniani 5, 35128, Padova, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padova, via Venezia 8, 35131, Padova, Italy
14
Scarpazza C, Miolla A, Zampieri I, Melis G, Sartori G, Ferracuti S, Pietrini P. Translational Application of a Neuro-Scientific Multi-Modal Approach Into Forensic Psychiatric Evaluation: Why and How? Front Psychiatry 2021; 12:597918. [PMID: 33613339 PMCID: PMC7892615 DOI: 10.3389/fpsyt.2021.597918] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Accepted: 01/14/2021] [Indexed: 01/01/2023] Open
Abstract
A prominent body of literature indicates that insanity evaluations, which are intended to provide influential expert reports for judges to reach a decision "beyond any reasonable doubt," suffer from low inter-rater reliability. This paper reviews the limitations of the classical approach to insanity evaluation and the criticisms of introducing a neuro-scientific approach in court. We explain why, in our opinion, these criticisms, which seriously hamper the translational implementation of neuroscience into the forensic setting, do not survive scientific scrutiny. Moreover, we discuss how a neuro-scientific multimodal approach may improve inter-rater reliability in insanity evaluation. Critically, neuroscience does not aim to introduce a brain-based concept of insanity; criteria for responsibility and insanity are, and should remain, clinical. Rather, following the falsificationist approach and the convergence-of-evidence principle, the neuro-scientific multimodal approach is proposed as a way to improve the reliability of insanity evaluations and to mitigate the influence of cognitive biases on the formulation of insanity opinions, with the final aim of reducing errors and controversies.
Affiliation(s)
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- Department of Psychosis Studies, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom
- Alessio Miolla
- Department of General Psychology, University of Padova, Padova, Italy
- Ilaria Zampieri
- Molecular Mind Laboratory, IMT School for Advanced Studies Lucca, Lucca, Italy
- Giulia Melis
- Department of General Psychology, University of Padova, Padova, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Stefano Ferracuti
- Department of Human Neurosciences, “Sapienza” University of Rome, Rome, Italy
- Pietro Pietrini
- Molecular Mind Laboratory, IMT School for Advanced Studies Lucca, Lucca, Italy
15
Monaro M, Mazza C, Colasanti M, Ferracuti S, Orrù G, di Domenico A, Sartori G, Roma P. Detecting faking-good response style in personality questionnaires with four choice alternatives. PSYCHOLOGICAL RESEARCH 2021; 85:3094-3107. [PMID: 33452928 PMCID: PMC8476468 DOI: 10.1007/s00426-020-01473-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 12/29/2020] [Indexed: 11/06/2022]
Abstract
Deliberate attempts to portray oneself in an unrealistic manner are commonly encountered in the administration of personality questionnaires. The main aim of the present study was to explore whether mouse tracking temporal indicators and machine learning models could improve the detection of subjects implementing a faking-good response style when answering personality inventories with four choice alternatives, with and without time pressure. A total of 120 volunteers were randomly assigned to one of four experimental groups and asked to respond to the Virtuous Responding (VR) validity scale of the PPI-R and the Positive Impression Management (PIM) validity scale of the PAI via a computer mouse. A mixed design was implemented, and predictive models were calculated. The results showed that, on the PIM scale, faking-good participants were significantly slower in responding than honest respondents. Relative to VR items, PIM items are shorter in length and feature no negations. Accordingly, the PIM scale was found to be more sensitive in distinguishing between honest and faking-good respondents, demonstrating high classification accuracy (80–83%).
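The core temporal finding above, slower responses among faking-good respondents on the PIM scale, boils down to a two-sample comparison of response times. A minimal sketch with synthetic timings (the means and spreads are invented, not the study's data):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

# Illustrative per-item response times (seconds) on a validity scale.
# Faking-good responding is modelled as slower, echoing the PIM-scale
# result reported in the abstract.
honest = rng.normal(1.8, 0.4, 30)
faking = rng.normal(2.3, 0.4, 30)

stat, p = ttest_ind(faking, honest)
print(f"t = {stat:.2f}, p = {p:.4f}")
```

In the actual study this kind of temporal feature is one input among several to the predictive models, not a stand-alone test.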
Affiliation(s)
- Merylin Monaro
- Department of General Psychology, University of Padova, Padua, Italy
- Cristina Mazza
- Department of Neuroscience, Imaging and Clinical Sciences, University "G. d'Annunzio", Chieti-Pescara, Italy
- Marco Colasanti
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
- Stefano Ferracuti
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Alberto di Domenico
- Department of Psychological, Health and Territorial Sciences, University "G. d'Annunzio", Chieti-Pescara, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padova, Padua, Italy
- Paolo Roma
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
16
Orrù G, Mazza C, Monaro M, Ferracuti S, Sartori G, Roma P. The Development of a Short Version of the SIMS Using Machine Learning to Detect Feigning in Forensic Assessment. PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09389-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
In the present study, we applied machine learning techniques to evaluate whether the Structured Inventory of Malingered Symptomatology (SIMS) can be reduced in length yet maintain accurate discrimination between consistent participants (i.e., presumed truth tellers) and symptom producers. We applied machine learning item selection techniques to data from Mazza et al. (2019c) to identify the minimum number of original SIMS items that could accurately distinguish between consistent participants, symptom accentuators, and symptom producers in real personal injury cases. Subjects were personal injury claimants who had undergone forensic assessment, a context known to incentivize malingering and symptom accentuation. Item selection yielded short versions of the scale with as few as 8 items (to differentiate between consistent participants and symptom producers) and as many as 10 items (to differentiate between consistent and inconsistent participants). The short scales had higher classification accuracy than the original SIMS and did not show the bias between false positives and false negatives that was originally reported.
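Deriving a short form by machine-learning item selection, as described above, can be illustrated with recursive feature elimination. The item indices and response data below are synthetic stand-ins, not actual SIMS items or claimant data:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Illustrative setup: 75 true/false "items" (the full SIMS length) answered
# by consistent responders vs. symptom producers; only a handful of
# hypothetical items actually carry signal.
n_items, n_subj = 75, 200
informative = [3, 10, 22, 41, 57, 60, 68, 74]        # hypothetical indices
y = rng.integers(0, 2, n_subj)                       # 0 = consistent, 1 = producer
X = rng.integers(0, 2, (n_subj, n_items)).astype(float)
for i in informative:
    # Informative items track group membership 80% of the time.
    X[:, i] = np.where(rng.random(n_subj) < 0.8, y, 1 - y)

# Recursive feature elimination keeps the 8 most discriminative items,
# analogous to deriving an 8-item short form from the full inventory.
selector = RFE(LogisticRegression(max_iter=1000),
               n_features_to_select=8).fit(X, y)
kept = np.flatnonzero(selector.support_)
print("items retained:", kept)
```

The published work used its own selection procedure and real forensic data; this sketch only shows the general shape of shrinking an item pool while preserving discrimination.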
17
Grant AF, Lace JW, Teague CL, Lowell KT, Ruppert PD, Garner AA, Gfeller JD. Detecting feigned symptoms of depression, anxiety, and ADHD, in college students with the structured inventory of malingered symptomatology. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:443-451. [PMID: 32456475 DOI: 10.1080/23279095.2020.1769097] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Objective: Research consistently shows how easily students can feign symptoms of ADHD on self-report checklists used to determine eligibility for curricular and standardized testing accommodations. However, it is unclear how easily students can feign psychological symptoms to access academic accommodations, making the assessment of symptom validity important in both populations. Method: Using a between-subjects design, 75 college students were randomly assigned to one of three groups: (1) coached feigning of ADHD, (2) coached feigning of depression and anxiety (DA), and (3) honest responding (HR). Participants completed the Depression, Anxiety, and Stress Scale (DASS-21) and the Structured Inventory of Malingered Symptomatology (SIMS). Results: The SIMS showed 100% specificity but low sensitivity (36-52%) for detecting feigned symptoms at different cutoffs. Group differences on the SIMS subscales were apparent, with elevated scores for the DA group on the Affective Disorders subscale and elevations for the ADHD group on the Low Intelligence and Amnestic subscales. Participants identified as feigning by the SIMS typically reported more severe symptoms on the DASS-21 than participants not identified. Conclusions: The SIMS classified feigned ADHD and DA participants equally well at both cutoff scores utilized. Potential reasons for the low sensitivity rates are discussed, and recommendations for future research are made.
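Sensitivity and specificity, the two accuracy figures quoted above, follow directly from a classifier's confusion matrix. A minimal illustration with invented counts chosen to mirror the reported 52% sensitivity at 100% specificity:

```python
# Forced-choice validity-test accuracy in miniature. The counts below are
# illustrative, not the study's data: 25 feigners and 25 honest responders
# scored against a hypothetical SIMS cutoff.
tp, fn = 13, 12   # feigners flagged / missed by the cutoff
tn, fp = 25, 0    # honest responders correctly cleared / wrongly flagged

sensitivity = tp / (tp + fn)   # proportion of feigners detected
specificity = tn / (tn + fp)   # proportion of honest responders cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Raising the cutoff trades sensitivity for specificity, which is why validity measures such as the SIMS are usually tuned to keep false positives near zero even at the cost of missing many feigners.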
Affiliation(s)
- Alexandra F Grant
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- John W Lace
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Carson L Teague
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Kimberly T Lowell
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Phillip D Ruppert
- Department of Psychiatry, Saint Louis University, St. Louis, MO, USA
- Annie A Garner
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Jeffrey D Gfeller
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
18
Use of mouse-tracking software to detect faking-good behavior on personality questionnaires: an explorative study. Sci Rep 2020; 10:4835. [PMID: 32179844 PMCID: PMC7075885 DOI: 10.1038/s41598-020-61636-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 02/28/2020] [Indexed: 11/25/2022] Open
Abstract
The aim of the present study was to explore whether kinematic indicators could improve the detection of subjects demonstrating faking-good behaviour when responding to personality questionnaires. One hundred and twenty volunteers were randomly assigned to one of four experimental groups (honest unspeeded, faking-good unspeeded, honest speeded, and faking-good speeded). Participants were asked to respond to the MMPI-2 underreporting scales (L, K, S) and the PPI-R Virtuous Responding (VR) scale using a computer mouse. The collected data included T-point scores on the L, K, S, and VR scales; response times on these scales; and several temporal and spatial mouse parameters. These data were used to investigate differences across the two manipulated variables (honest vs. faking-good; speeded vs. unspeeded). The results demonstrated that T-scores were significantly higher in the faking-good condition than in the honest condition; however, faking-good and honest respondents showed no statistically significant differences between the speeded and unspeeded conditions. Concerning temporal and spatial kinematic parameters, we observed mixed results across scales, and further investigation is required. The most consistent finding, albeit with small observed effects, concerns the L scale, on which faking-good respondents took longer to respond to stimuli and traced wider mouse trajectories to arrive at the given response.
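The spatial mouse parameters referred to above are typically derived from the sampled cursor path. A minimal sketch of two common measures, total path length and maximum deviation from the ideal straight line between start and response, using invented coordinates:

```python
import numpy as np

# Invented cursor samples (pixels) from click origin to response button.
traj = np.array([[0, 0], [10, 40], [35, 90], [80, 120], [100, 150]], float)

# Total path length: sum of distances between successive samples.
path_len = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))

# Maximum perpendicular deviation from the straight start-to-end line,
# computed via the 2-D cross-product magnitude.
start, end = traj[0], traj[-1]
line = end - start
rel = traj - start
dev = np.abs(line[0] * rel[:, 1] - line[1] * rel[:, 0]) / np.linalg.norm(line)
max_dev = dev.max()

print(f"path length = {path_len:.1f}, max deviation = {max_dev:.1f}")
```

Wider trajectories, as reported for faking-good respondents on the L scale, show up as larger values on both measures relative to the straight-line distance.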
19
Cartwright A, Donkin R. Knowledge of Depression and Malingering: An Exploratory Investigation. EUROPES JOURNAL OF PSYCHOLOGY 2020; 16:32-44. [PMID: 33680168 PMCID: PMC7913031 DOI: 10.5964/ejop.v16i1.1730] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 06/12/2019] [Indexed: 11/20/2022]
Abstract
Malingering a mental disorder for financial compensation can offer substantial rewards to those willing to attempt it. A recent review of UK medico-legal experts' practices for detecting malingering claimants showed that they are not well equipped to identify those who do malinger. This is not surprising, considering that very little is known about why individuals opt to malinger. One construct that may influence an individual's choice to malinger is their knowledge of the disorder, and given the high levels of depression literacy within the UK, this hypothesis warrants investigation. A brief depression knowledge scale was devised and administered to undergraduate students (N = 155), alongside a series of questions exploring how likely participants were to malinger in workplace stress and benefit claim vignettes. Depression knowledge did not affect the likelihood of engaging in any malingering strategy in either the workplace stress or the benefit claimant vignettes. Differences were found between the two vignettes, providing evidence for the context-specific nature of malingering, and an individual's previous experience of mental disorder was also influential.
Affiliation(s)
- Ashley Cartwright
- Behavioural Sciences, School of Human and Health Sciences, University of Huddersfield, Huddersfield, United Kingdom
- Rebecca Donkin
- Department of Psychology, Leeds Trinity University, Leeds, United Kingdom
20
Ilgunaite G, Giromini L, Bosi J, Viglione DJ, Zennaro A. A clinical comparison simulation study using the Inventory of Problems-29 (IOP-29) with the Center for Epidemiologic Studies Depression Scale (CES-D) in Lithuania. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:155-162. [DOI: 10.1080/23279095.2020.1725518] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- Guste Ilgunaite
- Department of Psychology, Mykolas Romeris University, Vilnius, Lithuania
- Jessica Bosi
- Department of Psychology, University of Surrey, Guildford, UK
- Donald J. Viglione
- California School of Professional Psychology, Alliant International University, San Diego, CA, USA
21
Liu J, Zhang J. An Item-Level Analysis for Detecting Faking on Personality Tests: Appropriateness of Ideal Point Item Response Theory Models. Front Psychol 2020; 10:3090. [PMID: 32038431 PMCID: PMC6987465 DOI: 10.3389/fpsyg.2019.03090] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Accepted: 12/31/2019] [Indexed: 11/13/2022] Open
Abstract
How to detect faking on personality measures has been investigated using various methods and procedures. As previous findings are mixed and rarely based on ideal point item response theory models, additional research is needed. This study modeled responses to personality tests using the ideal point method across instructed faking and honest responding conditions. A sample of undergraduate students completed the within-subjects measures to examine how the item location parameter derived from the generalized graded unfolding model changed, and how individuals' perceptions of items changed, when faking. The mean test score of the faking group was positively correlated with the magnitude of within-subjects score change. The item-level analysis revealed that both conscientiousness items (18.8%) and neuroticism items (50.0%) showed significant shifts in item parameters, suggesting that response patterns changed from the honest to the faking condition. The changes appeared in both positive and negative directions, demonstrating that faking can increase or decrease personality factor scores. The results indicated that perceptions of items can be altered by faking, offering some support for the ideal point model as an adequate measure for detecting faking. However, the diagnostic accuracy findings also implied that the appropriateness of ideal point models for detecting faking should be considered carefully, and that they should be used with caution. Implications, further research directions, and limitations are discussed.
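The ideal point (unfolding) logic behind the generalized graded unfolding model differs from cumulative IRT in that endorsement probability peaks when the respondent's trait level is close to the item's location, rather than rising monotonically. The toy kernel below illustrates only that qualitative shape; it is not the GGUM likelihood, and the item location is hypothetical:

```python
import numpy as np

# Toy ideal-point response function: endorsement probability falls off
# with squared distance between person level (theta) and item location
# (delta). This is an illustrative kernel, not the full GGUM.
def endorse_prob(theta, delta, alpha=1.5):
    return np.exp(-alpha * (theta - delta) ** 2)

thetas = np.linspace(-3, 3, 7)
delta = 1.0                       # hypothetical item location parameter
probs = endorse_prob(thetas, delta)
peak = thetas[np.argmax(probs)]
print("endorsement peaks at theta =", peak)
```

Under this view, instructed faking that shifts the estimated item location (as the abstract reports for many neuroticism items) changes where the endorsement peak sits, which is the signature the study exploits.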
Affiliation(s)
- Jie Liu
- School of Mathematics and Statistics, Southwest University, Chongqing, China
- Faculty of Psychology, Southwest University, Chongqing, China
- Jinfu Zhang
- Faculty of Psychology, Southwest University, Chongqing, China
22
Mazza C, Orrù G, Burla F, Monaro M, Ferracuti S, Colasanti M, Roma P. Indicators to distinguish symptom accentuators from symptom producers in individuals with a diagnosed adjustment disorder: A pilot study on inconsistency subtypes using SIMS and MMPI-2-RF. PLoS One 2019; 14:e0227113. [PMID: 31887214 PMCID: PMC6936836 DOI: 10.1371/journal.pone.0227113] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Accepted: 12/11/2019] [Indexed: 11/17/2022] Open
Abstract
In the context of legal damage evaluations, evaluees may exaggerate or simulate symptoms in an attempt to obtain greater economic compensation. To date, practitioners and researchers have focused on detecting malingering behavior as an exclusively unitary construct. However, we argue that there are two types of inconsistent behavior that speak to possible malingering: accentuating (i.e., exaggerating symptoms that are actually experienced) and simulating (i.e., fabricating symptoms entirely). Each has its own unique attributes, so it is necessary to distinguish between them. The aim of the present study was to identify objective indicators that differentiate symptom accentuators from symptom producers and consistent participants. We analyzed the Structured Inventory of Malingered Symptomatology (SIMS) scales and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) validity scales of 132 individuals with a diagnosed adjustment disorder with mixed anxiety and depressed mood who had undergone assessment for psychiatric/psychological damage. The results indicated that the SIMS Total Score, Neurologic Impairment, and Low Intelligence scales and the MMPI-2-RF Infrequent Responses (F-r) and Response Bias (RBS) scales successfully discriminated among symptom accentuators, symptom producers, and consistent participants. Machine learning analysis was used to identify the most efficient parameter for classifying these three groups, and it identified the SIMS Total Score as the best indicator.
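That a single indicator such as the SIMS Total Score can separate three ordered groups is easy to visualize: a shallow decision tree learns two cut points on the one score. The group distributions below are invented for the sketch, not the study's data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Illustrative SIMS Total Scores for the three groups discussed in the
# abstract; the means and spreads are invented.
consistent   = rng.normal(10, 4, 60)   # modest scores
accentuators = rng.normal(20, 5, 40)   # elevated but not extreme
producers    = rng.normal(32, 6, 32)   # markedly elevated

X = np.concatenate([consistent, accentuators, producers]).reshape(-1, 1)
y = np.array([0] * 60 + [1] * 40 + [2] * 32)

# A depth-2 tree on the single score learns two thresholds, which is how
# one indicator can classify three ordered groups.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
acc = cross_val_score(tree, X, y, cv=5).mean()
print(f"three-group accuracy from the total score alone: {acc:.2f}")
```

The sketch's accuracy depends entirely on the invented separations; the study's point is that, on real cases, the SIMS Total Score was the most efficient of the candidate indicators.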
Affiliation(s)
- Cristina Mazza
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Franco Burla
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Merylin Monaro
- Department of General Psychology, University of Padova, Padova, Italy
- Stefano Ferracuti
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Marco Colasanti
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Paolo Roma
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
23
Pace G, Orrù G, Monaro M, Gnoato F, Vitaliani R, Boone KB, Gemignani A, Sartori G. Malingering Detection of Cognitive Impairment With the b Test Is Boosted Using Machine Learning. Front Psychol 2019; 10:1650. [PMID: 31396127 PMCID: PMC6664275 DOI: 10.3389/fpsyg.2019.01650] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 07/01/2019] [Indexed: 11/28/2022] Open
Abstract
Objective: Here we report an investigation of the accuracy of the b Test, a measure designed to identify malingering of cognitive symptoms, in detecting malingerers of mild cognitive impairment. Method: Three groups of participants, patients with Mild Neurocognitive Disorder (n = 21), healthy elders (controls, n = 21), and healthy elders instructed to simulate mild cognitive disorder (malingerers, n = 21), were administered two background neuropsychological tests (MMSE, FAB) as well as the b Test. Results: Malingerers performed significantly worse than patients and controls on all error scores, and worse than controls, but comparably to patients, on the time score. Patients performed significantly worse than controls on all scores, but both groups showed the same pattern of more omission than commission errors. By contrast, malingerers exhibited the opposite pattern, with more commission than omission errors. Machine learning models achieved an overall accuracy higher than 90% in distinguishing patients from malingerers on the basis of b Test results alone. Conclusions: Our findings suggest that b Test error scores accurately distinguish patients with Mild Neurocognitive Disorder from malingerers and may complement other validated procedures such as the Medical Symptom Validity Test.
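The omission/commission asymmetry described above is exactly the kind of pattern a simple classifier can exploit. A sketch with invented error counts echoing the reported profiles (patients: more omissions; malingerers: more commissions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Invented b Test-style error counts; the study's actual scores differ.
n = 21
patients = np.column_stack([
    rng.poisson(12, n),   # omission errors (high)
    rng.poisson(4, n),    # commission errors (low)
])
malingerers = np.column_stack([
    rng.poisson(5, n),    # omission errors (low)
    rng.poisson(14, n),   # commission errors (high)
])
X = np.vstack([patients, malingerers]).astype(float)
y = np.array([0] * n + [1] * n)

# A linear model on the two error counts effectively learns the
# commission-minus-omission contrast described in the abstract.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"patient vs. malingerer accuracy: {acc:.2f}")
```

The published models used the full set of b Test scores and reached over 90% accuracy; the sketch only demonstrates why an error-pattern reversal is so discriminative.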
Affiliation(s)
- Giorgia Pace
- Department of Psychology, University of Padova, Padova, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of Psychology, University of Padova, Padova, Italy
- Kyle B Boone
- Department of Psychiatry and Biobehavioral Sciences, UCLA School of Medicine, California School of Forensic Studies, Alliant International University, Alhambra, CA, United States
- Angelo Gemignani
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
24
Zago S, Piacquadio E, Monaro M, Orrù G, Sampaolo E, Difonzo T, Toncini A, Heinzl E. The Detection of Malingered Amnesia: An Approach Involving Multiple Strategies in a Mock Crime. Front Psychiatry 2019; 10:424. [PMID: 31263432 PMCID: PMC6589901 DOI: 10.3389/fpsyt.2019.00424] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Accepted: 05/29/2019] [Indexed: 12/23/2022] Open
Abstract
The nature of amnesia in the context of crime has been the subject of prolonged debate. It is not uncommon that, after committing a violent crime, the offender either has no memory of the event or recalls it only with gaps. A number of studies have been conducted to differentiate between simulated and genuine amnesia. The recognition of probable malingering requires several inferential methods: it typically involves the defendant's medical records, self-reports, observed behavior, and the results of a comprehensive neuropsychological examination. In addition, a variety of procedures have been developed to detect very specific forms of malingered amnesia in crime. In this paper, we investigated the efficacy of three techniques, facial thermography, kinematic analysis, and symptom validity testing, in detecting malingered amnesia for a crime. Participants were randomly assigned to one of two experimental conditions: one group was instructed to simulate amnesia after a mock homicide, and a second group was simply asked to behave honestly after committing the mock homicide. The outcomes show that kinematic analysis and symptom validity testing achieve significant accuracy in detecting feigned amnesia, while thermal imaging does not provide converging evidence. The results are encouraging and may provide a first step toward applying these procedures, in a multimethod approach, to crime-specific cases of amnesia.
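Symptom validity testing of claimed amnesia commonly uses a forced-choice format, in which scoring significantly below chance suggests the correct answers were recognized and deliberately avoided. A minimal sketch of that logic with an illustrative score (the item count and result are invented):

```python
from scipy.stats import binomtest

# Two-alternative forced-choice logic in miniature: a truly amnesic
# responder should still score near 50% by guessing, so a score far
# BELOW chance implies deliberate avoidance of the correct answers.
n_items = 50
n_correct = 15   # hypothetical examinee: 15/50 correct

result = binomtest(n_correct, n_items, p=0.5, alternative="less")
below_chance = result.pvalue < 0.05
print(f"P(score this low by guessing) = {result.pvalue:.4f}")
print("below-chance performance:", below_chance)
```

This is the general statistical rationale behind forced-choice symptom validity tests; the study combined such testing with kinematic and thermal measures rather than relying on it alone.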
Affiliation(s)
- Stefano Zago
- U.O.C. Neurologia, IRCSS Fondazione Ospedale Maggiore Policlinico di Milano, Milano, Italy
- Emanuela Piacquadio
- U.O.C. Neurologia, IRCSS Fondazione Ospedale Maggiore Policlinico di Milano, Milano, Italy
- Merylin Monaro
- Department of General Psychology, University of Padova, Padova, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Erika Sampaolo
- U.O.C. Neurologia, IRCSS Fondazione Ospedale Maggiore Policlinico di Milano, Milano, Italy
- IMT School for Advanced Studies Lucca, Lucca, Italy
- Teresa Difonzo
- U.O.C. Neurologia, IRCSS Fondazione Ospedale Maggiore Policlinico di Milano, Milano, Italy
- Andrea Toncini
- Department of General Psychology, University of Padova, Padova, Italy
- Eugenio Heinzl
- Dipartimento di Medicina Veterinaria, Università degli Studi di Milano, Milano, Italy
25
Demidova LY, Murphy L, Dwyer RG, Klapilová K, Fedoroff JP. International review of sexual behaviour assessment labs. Int Rev Psychiatry 2019; 31:114-125. [PMID: 30938553 DOI: 10.1080/09540261.2018.1559135] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
This article provides a comparison and comprehensive analysis of varied approaches to the assessment of sexual interest and behaviours at different international sexual behaviour assessment labs. The assessment protocols are described for four sexual behaviour laboratories: the Royal Ottawa Mental Health Centre's Sexual Behaviours Clinic in Canada; the Medical University of South Carolina's Sexual Behaviours Clinic and Laboratory in the US; the Laboratory of Evolutionary Sexology and Psychopathology in the Czech Republic; and the Laboratory of Forensic Sexology in Russia. An overview of examinee demographics and types of cases assessed is provided for each lab. Assessment protocols, including psychometric measures and objective measures of sexual interest and arousal, such as penile plethysmography or eye-tracking, are also reviewed. The differences across labs may lead to interesting and productive cross-cultural investigations and studies about the efficacy of specific assessment methods.
Affiliation(s)
- Liubov Y Demidova
- Laboratory of Forensic Sexology, Department for Forensic Psychiatric Assessment in Criminal Proceedings, V. Serbsky National Medical Research Centre for Psychiatry and Narcology, Moscow, Russian Federation
- Lisa Murphy
- Sexual Behaviours Clinic, Integrated Forensic Program, The Royal, Ottawa, ON, Canada
- R Gregg Dwyer
- Sexual Behaviors Clinic & Lab, Community and Public Safety Psychiatry Division, Department of Psychiatry & Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Katerina Klapilová
- Laboratory of Evolutionary Sexology and Psychopathology, National Institute of Mental Health, Klecany, Czech Republic
- Faculty of Humanities, Charles University, Prague, Czech Republic
- J Paul Fedoroff
- Sexual Behaviours Clinic, Integrated Forensic Program, The Royal, Ottawa, ON, Canada
- Department of Psychiatry, University of Ottawa, Ottawa, Canada
26
Mazza C, Monaro M, Orrù G, Burla F, Colasanti M, Ferracuti S, Roma P. Introducing Machine Learning to Detect Personality Faking-Good in a Male Sample: A New Model Based on Minnesota Multiphasic Personality Inventory-2 Restructured Form Scales and Reaction Times. Front Psychiatry 2019; 10:389. [PMID: 31275176 PMCID: PMC6593269 DOI: 10.3389/fpsyt.2019.00389] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Accepted: 05/16/2019] [Indexed: 11/13/2022] Open
Abstract
Background and Purpose. The use of machine learning (ML) models in the detection of malingering has yielded encouraging results, showing promising accuracy levels. We investigated the possible application of this methodology when trained on behavioral features, such as response time (RT) and time pressure, to identify faking behavior in self-report personality questionnaires. To do so, we reintroduced the article of Roma et al. (2018), which highlighted that RTs and time pressure are useful variables in the detection of faking; we then extended the number of participants and applied an ML analysis. Materials and Methods. The sample was composed of 175 subjects, of whom all were graduates (having completed at least 17 years of instruction), male, and Caucasian. Subjects were randomly assigned to four groups: honest speeded, faking-good speeded, honest unspeeded, and faking-good unspeeded. A software version of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) was administered. Results. Results indicated that ML algorithms reached very high accuracies (around 95%) in detecting malingerers when subjects are instructed to respond under time pressure. The classifiers' performance was lower when the subjects responded with no time restriction to the MMPI-2-RF items, with accuracies ranging from 75% to 85%. Further analysis demonstrated that T-scores of validity scales are ineffective to detect fakers when participants were not under temporal pressure (accuracies 55-65%), whereas temporal features resulted to be more useful (accuracies 70-75%). By contrast, temporal features and T-scores of validity scales are equally effective in detecting fakers when subjects are under time pressure (accuracies higher than 90%). Discussion. To conclude, results demonstrated that ML techniques are extremely valuable and reach high performance in detecting fakers in self-report personality questionnaires over more the traditional psychometric techniques. 
The MMPI-2-RF manual criteria for validity scales are very poor at identifying under-reported profiles. Moreover, temporal measures are useful tools for distinguishing honest from dishonest responders, especially in the absence of time pressure. Indeed, time pressure brings out malingerers more clearly than a no-time-pressure condition does.
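The classification setup described in the abstract above can be sketched as follows. This is only an illustrative sketch: the simulated data, the two features (mean item response time and a validity-scale T-score), and the choice of a random-forest classifier are assumptions for demonstration, not the study's actual dataset or pipeline.

```python
# Illustrative sketch: classifying honest vs. faking-good responders from
# temporal features and a validity-scale T-score. All data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 200  # hypothetical number of simulated respondents

# Hypothetical features per respondent: mean item RT (seconds) and a
# validity-scale T-score. Faking-good responders are simulated with
# longer RTs and elevated validity scores.
honest = np.column_stack([rng.normal(2.0, 0.4, n // 2),   # mean RT
                          rng.normal(50.0, 8.0, n // 2)]) # T-score
faking = np.column_stack([rng.normal(3.0, 0.5, n // 2),
                          rng.normal(70.0, 8.0, n // 2)])
X = np.vstack([honest, faking])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = honest, 1 = faking

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"hold-out accuracy: {acc:.2f}")
```

With well-separated simulated groups the hold-out accuracy is high; on real questionnaire data, accuracy depends on how strongly the temporal features and validity scores actually differ between honest and faking respondents, which is what the study quantifies.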
Collapse
Affiliation(s)
- Cristina Mazza
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
| | - Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
| | - Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
| | - Franco Burla
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
| | - Marco Colasanti
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
| | - Stefano Ferracuti
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
| | - Paolo Roma
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
| |
Collapse
|
27
|
Walczyk JJ, Sewell N, DiBenedetto MB. A Review of Approaches to Detecting Malingering in Forensic Contexts and Promising Cognitive Load-Inducing Lie Detection Techniques. Front Psychiatry 2018; 9:700. [PMID: 30622488 PMCID: PMC6308182 DOI: 10.3389/fpsyt.2018.00700] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/12/2018] [Accepted: 12/03/2018] [Indexed: 12/04/2022] Open
Abstract
Malingering, the feigning of psychological or physical ailment for gain, imposes high costs on society, especially on the criminal-justice system. In this article, we review some of the costs of malingering in forensic contexts. We then review the most common methods of malingering detection, including those for feigned psychiatric and cognitive impairments, and consider the shortcomings of each. The article continues with a discussion of commonly used means of detecting deception. Although not traditionally used to uncover malingering, new, innovative methods are emphasized that attempt to induce greater cognitive load on liars than on truth tellers, some informed by theoretical accounts of deception. Because malingering is a type of deception, we argue that such cognitive approaches and theoretical understanding can be adapted to the detection of malingering to supplement existing methods.
Collapse
Affiliation(s)
- Jeffrey J. Walczyk
- Psychology and Behavioral Sciences, Louisiana Tech University, Ruston, LA, United States
| |
Collapse
|