1. Melis G, Ursino M, Scarpazza C, Zangrossi A, Sartori G. Detecting lies in investigative interviews through the analysis of response latencies and error rates to unexpected questions. Sci Rep 2024; 14:12268. PMID: 38806588; PMCID: PMC11133341; DOI: 10.1038/s41598-024-63156-y.
Abstract
In this study, we propose an approach to detect deception during investigative interviews by integrating response latency and error analysis with the unexpected question technique. Sixty participants were assigned to an honest (n = 30) or a deceptive group (n = 30). The deceptive group was instructed to memorize the false biographical details of a fictitious identity. Throughout the interviews, participants were presented with a randomized sequence of control, expected, and unexpected open-ended questions about identity. Responses were audio recorded for detailed examination. Our findings indicate that deceptive participants showed markedly longer latencies and higher error rates when answering expected questions (requiring deception) and unexpected questions (for which premeditated deception was not possible). Longer response latencies were also observed in participants attempting deception when answering control questions (which necessitated truthful answers). Moreover, a within-subject analysis highlighted that responding to unexpected questions significantly impaired individuals' performance compared to answering control and expected questions. Leveraging machine-learning algorithms, our approach attained a classification accuracy of 98% in distinguishing deceptive from honest participants. Additionally, a classification analysis at the single-response level was conducted. Our findings underscore the effectiveness of merging response latency metrics and error rates with unexpected questioning as a robust method for detecting identity deception in investigative interviews. We also discuss significant implications for enhancing interview strategies.
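A toy illustration of the classification idea in this abstract (not the authors' actual pipeline or data): a nearest-centroid rule over two hypothetical per-participant features, mean response latency and error rate. The feature values and decision rule are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): classify participants as
# deceptive vs. honest from two hypothetical features -- mean response
# latency (seconds) and error rate -- using a nearest-centroid rule.

def centroid(rows):
    """Component-wise mean of a list of (latency, error_rate) pairs."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(2))

def train(honest, deceptive):
    """One centroid per class."""
    return {"honest": centroid(honest), "deceptive": centroid(deceptive)}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], x))
    return min(model, key=dist)

# Toy data: deceptive responders tend to be slower and less accurate.
honest = [(0.9, 0.02), (1.0, 0.05), (0.8, 0.03)]
deceptive = [(1.6, 0.15), (1.8, 0.20), (1.5, 0.12)]
model = train(honest, deceptive)
print(predict(model, (1.7, 0.18)))  # a slow, error-prone responder -> deceptive
```

In practice one would use a proper classifier with cross-validated evaluation; the centroid rule just makes the latency/error intuition concrete.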
Affiliation(s)
- Giulia Melis
- Department of General Psychology, University of Padua, Padova, Italy.
- Human Inspired Technology Research Centre, University of Padua, Padova, Italy.
- Martina Ursino
- Department of General Psychology, University of Padua, Padova, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padua, Padova, Italy
- Translational Neuroimaging and Cognitive Lab, IRCCS San Camillo Hospital, Venice, Italy
- Andrea Zangrossi
- Department of General Psychology, University of Padua, Padova, Italy
- Padova Neuroscience Center (PNC), University of Padua, Padova, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padova, Italy
2. Loconte R, Russo R, Capuozzo P, Pietrini P, Sartori G. Verbal lie detection using Large Language Models. Sci Rep 2023; 13:22849. PMID: 38129677; PMCID: PMC10739834; DOI: 10.1038/s41598-023-50214-0.
Abstract
Human accuracy in detecting deception through intuitive judgments has been shown not to exceed chance level. Therefore, several automated verbal lie detection techniques employing Machine Learning and Transformer models have been developed to reach higher accuracy. This study is the first to explore the performance of a Large Language Model, FLAN-T5 (small and base sizes), on a lie-detection classification task across three English-language datasets encompassing personal opinions, autobiographical memories, and future intentions. After performing a stylometric analysis to describe linguistic differences across the three datasets, we tested the small- and base-sized FLAN-T5 in three scenarios using 10-fold cross-validation: one with the train and test sets coming from the same dataset, one with the train set coming from two datasets and the test set from the third, and one with the train and test sets coming from all three datasets. We reached state-of-the-art results in Scenarios 1 and 3, outperforming previous benchmarks. The results also revealed that performance depended on model size, with larger models exhibiting higher performance. Furthermore, a stylometric analysis was performed for explainability, finding that linguistic features associated with the Cognitive Load framework may influence the model's predictions.
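The three train/test scenarios described in this abstract can be sketched as splits over three toy datasets. The dataset names echo the abstract, but their contents, labels, and split sizes here are illustrative assumptions, not the paper's data or exact protocol.

```python
# Sketch of the three evaluation scenarios: within-dataset, cross-dataset
# transfer, and pooled. Items are (text, label) pairs with toy contents.

datasets = {
    "opinions":   [("statement-o%d" % i, i % 2) for i in range(10)],
    "memories":   [("statement-m%d" % i, i % 2) for i in range(10)],
    "intentions": [("statement-i%d" % i, i % 2) for i in range(10)],
}

def scenario_1(name):
    """Train and test on the same dataset (here: a simple 50/50 split)."""
    data = datasets[name]
    return data[: len(data) // 2], data[len(data) // 2 :]

def scenario_2(held_out):
    """Train on two datasets, test on the third (cross-dataset transfer)."""
    train = [x for k, v in datasets.items() if k != held_out for x in v]
    return train, datasets[held_out]

def scenario_3():
    """Pool all three datasets, then split (here: alternating items)."""
    pooled = [x for v in datasets.values() for x in v]
    return pooled[0::2], pooled[1::2]

train, test = scenario_2("intentions")
print(len(train), len(test))  # 20 10
```

The paper additionally applies 10-fold cross-validation within each scenario; the splits above only show which data pools feed the train and test sides.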
Affiliation(s)
- Riccardo Loconte
- Molecular Mind Lab, IMT School for Advanced Studies Lucca, Piazza San Francesco 19, 55100, Lucca, LU, Italy.
- Roberto Russo
- Department of Mathematics "Tullio Levi-Civita", University of Padova, Padova, Italy
- Pasquale Capuozzo
- Department of General Psychology, University of Padova, Padova, Italy
- Pietro Pietrini
- Molecular Mind Lab, IMT School for Advanced Studies Lucca, Piazza San Francesco 19, 55100, Lucca, LU, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padova, Padova, Italy
3. Asonov D, Krylov M, Omelyusik V, Ryabikina A, Litvinov E, Mitrofanov M, Mikhailov M, Efimov A. Building a second-opinion tool for classical polygraph. Sci Rep 2023; 13:5522. PMID: 37069221; PMCID: PMC10110587; DOI: 10.1038/s41598-023-31775-6.
Abstract
Classical polygraph screenings are routinely used by critical organizations such as banks, law enforcement agencies, and federal governments. A major concern of the scientific community is that screenings are prone to errors. However, screening errors are due not only to the method but also to human (polygraph examiner) error. Here we show an application of machine learning (ML) to detect examiner errors. From an ML perspective, we trained an error detection model in the absence of labeled errors. From a practical perspective, we devised and successfully tested a second-opinion tool to find human errors in examiners' conclusions, thus reducing the subjectivity of polygraph screenings. We report novel features that improve the model's accuracy, as well as experimental results on whether people lie differently about different topics. We anticipate our results to be a step towards rethinking classical polygraph practices.
Affiliation(s)
- Dmitri Asonov
- Sber Innovation and Research, Sberbank of Russia, Moscow, 117997, Russian Federation
- Maksim Krylov
- Internal Security Department, Sberbank of Russia, Moscow, 117997, Russian Federation
- Vladimir Omelyusik
- Sber Innovation and Research, Sberbank of Russia, Moscow, 117997, Russian Federation
- Anastasiya Ryabikina
- Internal Security Department, Sberbank of Russia, Moscow, 117997, Russian Federation
- Evgeny Litvinov
- Sber Innovation and Research, Sberbank of Russia, Moscow, 117997, Russian Federation
- Maksim Mitrofanov
- Internal Security Department, Sberbank of Russia, Moscow, 117997, Russian Federation
- Maksim Mikhailov
- Internal Security Department, Sberbank of Russia, Moscow, 117997, Russian Federation
- Albert Efimov
- Sber Innovation and Research, Sberbank of Russia, Moscow, 117997, Russian Federation
- University of Science and Technology MISIS, Moscow, 119049, Russian Federation
4. Spagnolli A, Masotina M, Furlan M, Pluchino P, Martinelli M, Gamberini L. Sharing the Space With the "Victim" Can Increase Help Rates: A Study With Virtual Reality. Front Psychol 2021; 12:729077. PMID: 34566815; PMCID: PMC8455842; DOI: 10.3389/fpsyg.2021.729077.
Abstract
A typical protocol for the psychological study of helping behavior features two core roles: a help seeker suffering from some personal or situational emergency (often called the “victim”) and a potential helper. The setting of these studies is such that the victim and the helper often share the same space. We wondered whether this spatial arrangement might affect the help rate. We therefore designed a simple virtual reality study in which space sharing could be manipulated. The participant plays the role of a potential helper; the victim is a humanoid located inside a virtual building. When the request for help is issued, the participant can be either in the same spatial region as the victim (the virtual building) or outside it. The effect of space was tested in two kinds of emergencies: a mere request for help and a request for help during a fire. The analysis shows that, in both kinds of emergencies, participants were more likely to help the victim when sharing its space. This study suggests controlling for spatial arrangement when investigating helping behavior. It also illustrates the usefulness of virtual reality for further investigating the role of space in pro-social behavior during emergencies.
Affiliation(s)
- Anna Spagnolli
- Department of General Psychology, University of Padova, Padua, Italy; Human Inspired Technologies Research Centre, University of Padova, Padua, Italy
- Mariavittoria Masotina
- Department of General Psychology, University of Padova, Padua, Italy; Human Inspired Technologies Research Centre, University of Padova, Padua, Italy
- Mattia Furlan
- Department of General Psychology, University of Padova, Padua, Italy; Human Inspired Technologies Research Centre, University of Padova, Padua, Italy
- Patrik Pluchino
- Department of General Psychology, University of Padova, Padua, Italy; Human Inspired Technologies Research Centre, University of Padova, Padua, Italy
- Luciano Gamberini
- Department of General Psychology, University of Padova, Padua, Italy; Human Inspired Technologies Research Centre, University of Padova, Padua, Italy
5. Mazzuca C, Benassi M, Nicoletti R, Sartori G, Lugli L. Assessing the impact of previous experience on lie effects through a transfer paradigm. Sci Rep 2021; 11:8961. PMID: 33903680; PMCID: PMC8076267; DOI: 10.1038/s41598-021-88387-1.
Abstract
Influential lines of research propose dual-process explanations to account both for the cognitive cost implied in lying and for that entailed in resolving the conflict posed by Simon tasks. The emergence and consistency of the Simon effect have been shown to be modulated by both practice effects and transfer effects. Although several studies provided evidence that the cognitive demand of lying may vary as a function of practice, whether and how transfer effects could also play a role remains an open question. We addressed this question with one experiment in which participants completed a Differentiation of Deception Paradigm twice (baseline and test sessions). Crucially, between the baseline and test sessions, participants performed a training session consisting of a spatial compatibility task with an incompatible (condition 1) or compatible (condition 2) mapping, a non-spatial task (condition 3), or no task (condition 4). Results speak in favour of a modulation of individual performance by immediate prior experience, specifically by incompatible spatial training.
Affiliation(s)
- Claudia Mazzuca
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK.
- Roberto Nicoletti
- Department of Philosophy and Communication, University of Bologna, Via A. Gardino, 23, 40122, Bologna, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
- Luisa Lugli
- Department of Philosophy and Communication, University of Bologna, Via A. Gardino, 23, 40122, Bologna, Italy
6. Tomas F, Tsimperidis I, Demarchi S, El Massioui F. Keyboard dynamics discrepancies between baseline and deceptive eyewitness narratives. Applied Cognitive Psychology 2020. DOI: 10.1002/acp.3743.
Affiliation(s)
- Frédéric Tomas
- Human and Artificial Cognitions Laboratory, Department of Psychology, University Paris 8, Saint-Denis, France
- Ioannis Tsimperidis
- Department of Electrical and Computer Engineering, Democritus University of Thrace, Komotini, Greece
- Samuel Demarchi
- Human and Artificial Cognitions Laboratory, Department of Psychology, University Paris 8, Saint-Denis, France
- Farid El Massioui
- Human and Artificial Cognitions Laboratory, Department of Psychology, University Paris 8, Saint-Denis, France
7. The detection of faked identity using unexpected questions and choice reaction times. Psychological Research 2020; 85:2474-2482. PMID: 32886169; PMCID: PMC8357779; DOI: 10.1007/s00426-020-01410-4.
Abstract
The identification of faked identities, especially within the Internet environment, remains a challenging issue for both companies and researchers. Recently, however, latency-based lie detection techniques have been developed to evaluate whether a respondent is the real owner of a certain identity. Among the paradigms applied to this purpose, the technique of asking unexpected questions has proved useful for differentiating liars from truth-tellers. The aim of the present study was to assess whether a choice reaction time (RT) paradigm, combined with the unexpected question technique, could efficiently detect identity liars. Results demonstrate that the most informative feature in distinguishing liars from truth-tellers is the Inverse Efficiency Score (IES, an index that combines speed and accuracy) on unexpected questions. Moreover, to assess the predictive power of the technique, machine-learning models were trained and tested, obtaining an out-of-sample classification accuracy of 90%. Overall, these findings indicate that it is possible to detect liars declaring faked identities by asking unexpected questions and measuring RTs and errors, with an accuracy comparable to that of well-established latency-based techniques, such as mouse and keystroke dynamics recording.
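The Inverse Efficiency Score mentioned in this abstract is conventionally defined as the mean correct-response RT divided by the proportion of correct responses, so slower or less accurate responding both push the score up. A minimal sketch with made-up RTs (not data from the study):

```python
# Inverse Efficiency Score (IES) = mean correct-response RT / accuracy.
# Higher IES means worse combined speed-accuracy performance.

def inverse_efficiency(rts_ms, correct):
    """rts_ms: response times in ms; correct: parallel list of booleans."""
    correct_rts = [rt for rt, ok in zip(rts_ms, correct) if ok]
    accuracy = sum(correct) / len(correct)
    return (sum(correct_rts) / len(correct_rts)) / accuracy

# Truth-teller-like pattern: fast and fully accurate.
print(inverse_efficiency([500, 520, 480, 510], [True] * 4))  # 502.5
# Liar-like pattern on unexpected questions: slower, with one error.
print(inverse_efficiency([800, 900, 850, 700], [True, True, True, False]))
```

Note that IES penalizes the error twice in a sense: the erroneous trial's RT is excluded from the numerator, and the reduced accuracy inflates the quotient.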
8. Monaro M, Cannonito E, Gamberini L, Sartori G. Spotting faked 5 stars ratings in E-Commerce using mouse dynamics. Computers in Human Behavior 2020. DOI: 10.1016/j.chb.2020.106348.
9. Cartwright A, Donkin R. Knowledge of Depression and Malingering: An Exploratory Investigation. Europe's Journal of Psychology 2020; 16:32-44. PMID: 33680168; PMCID: PMC7913031; DOI: 10.5964/ejop.v16i1.1730.
Abstract
Malingering a mental disorder for financial compensation can offer substantial rewards to those willing to do so. A recent review of UK medico-legal experts' practices for detecting malingering claimants showed that they are not well equipped to do so. This is not surprising, considering that very little is known about why individuals opt to malinger. A potential construct that may influence an individual's choice to malinger is their knowledge of the disorder, and given the high levels of depression literacy within the UK, it is important that this hypothesis be investigated. A brief depression knowledge scale was devised and administered to undergraduate students (N = 155) alongside a series of questions exploring how likely participants were to malinger in both workplace-stress and benefit-claim vignettes. Depression knowledge did not affect the likelihood of engaging in any malingering strategy in either the workplace-stress or the benefit-claim vignettes. Differences were found between the two vignettes, providing evidence for the context-specific nature of malingering, and an individual's previous experience of mental disorder was also influential.
Affiliation(s)
- Ashley Cartwright
- Behavioural Sciences, School of Human and Health Sciences, University of Huddersfield, Huddersfield, United Kingdom
- Rebecca Donkin
- Department of Psychology, Leeds Trinity University, Leeds, United Kingdom
10. Orrù G, Monaro M, Conversano C, Gemignani A, Sartori G. Machine Learning in Psychometrics and Psychological Research. Front Psychol 2020; 10:2970. PMID: 31998200; PMCID: PMC6966768; DOI: 10.3389/fpsyg.2019.02970.
Abstract
Recent controversies about the replicability of behavioral research analyzed using statistical inference have spurred interest in developing more efficient techniques for analyzing the results of psychological experiments. Here we claim that complementing the analytical workflow of psychological experiments with Machine Learning-based analysis will both maximize accuracy and minimize replicability issues. Compared to statistical inference, ML analysis of experimental data is model agnostic and primarily focused on prediction rather than inference. We also highlight some potential pitfalls resulting from the adoption of Machine Learning-based experiment analysis: if not properly used, it can lead to over-optimistic accuracy estimates similar to those observed using statistical inference. Remedies to such pitfalls are also presented, such as building models based on cross-validation and using ensemble models. Finally, since ML models are typically regarded as black boxes, we discuss strategies aimed at rendering their predictions more transparent.
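The cross-validation remedy described in this abstract can be illustrated with a deliberately simple stand-in model: accuracy measured on the training data is over-optimistic, while k-fold cross-validation estimates out-of-sample performance. The 1-nearest-neighbour rule and toy data below are assumptions for the sketch, not the authors' method.

```python
# Over-optimism sketch: training-set accuracy vs. 3-fold cross-validation.

def nn_predict(train, x):
    """1-nearest-neighbour over (value, label) pairs."""
    return min(train, key=lambda t: abs(t[0] - x))[1]

def accuracy(train, test):
    return sum(nn_predict(train, x) == y for x, y in test) / len(test)

data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.2, 0), (0.8, 1)]

# Training-set accuracy: each point is its own nearest neighbour -> 1.0.
print(accuracy(data, data))

# 3-fold cross-validation: hold out each fold in turn.
folds = [data[i::3] for i in range(3)]
cv_scores = []
for i, held_out in enumerate(folds):
    train = [x for j, f in enumerate(folds) if j != i for x in f]
    cv_scores.append(accuracy(train, held_out))
print(sum(cv_scores) / len(cv_scores))  # lower than the training-set estimate
```

The gap between the two printed numbers is exactly the over-optimism the abstract warns about; only the cross-validated figure approximates performance on new data.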
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
- Ciro Conversano
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
11. Mazza C, Monaro M, Orrù G, Burla F, Colasanti M, Ferracuti S, Roma P. Introducing Machine Learning to Detect Personality Faking-Good in a Male Sample: A New Model Based on Minnesota Multiphasic Personality Inventory-2 Restructured Form Scales and Reaction Times. Front Psychiatry 2019; 10:389. PMID: 31275176; PMCID: PMC6593269; DOI: 10.3389/fpsyt.2019.00389.
Abstract
Background and Purpose. The use of machine learning (ML) models in the detection of malingering has yielded encouraging results, showing promising accuracy levels. We investigated the possible application of this methodology, trained on behavioral features such as response time (RT) and time pressure, to identify faking behavior in self-report personality questionnaires. To do so, we revisited the study of Roma et al. (2018), which highlighted that RTs and time pressure are useful variables in the detection of faking; we then extended the number of participants and applied an ML analysis. Materials and Methods. The sample comprised 175 subjects, all of whom were graduates (having completed at least 17 years of education), male, and Caucasian. Subjects were randomly assigned to four groups: honest speeded, faking-good speeded, honest unspeeded, and faking-good unspeeded. A software version of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) was administered. Results. Results indicated that ML algorithms reached very high accuracies (around 95%) in detecting malingerers when subjects responded under time pressure. The classifiers' performance was lower when subjects responded to the MMPI-2-RF items with no time restriction, with accuracies ranging from 75% to 85%. Further analysis demonstrated that T-scores of validity scales are ineffective in detecting fakers when participants are not under temporal pressure (accuracies 55-65%), whereas temporal features proved more useful (accuracies 70-75%). By contrast, temporal features and T-scores of validity scales are equally effective in detecting fakers when subjects are under time pressure (accuracies higher than 90%). Discussion. In conclusion, results demonstrated that ML techniques are extremely valuable and outperform more traditional psychometric techniques in detecting fakers in self-report personality questionnaires. The MMPI-2-RF manual criteria for validity scales are very poor at identifying under-reported profiles. Moreover, temporal measures are useful in distinguishing honest from dishonest responders, especially in a no-time-pressure condition. Indeed, time pressure brings out malingerers more clearly than no time pressure.
Affiliation(s)
- Cristina Mazza
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
- Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Franco Burla
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
- Marco Colasanti
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
- Stefano Ferracuti
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
- Paolo Roma
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
12. Unanticipated questions can yield unanticipated outcomes in investigative interviews. PLoS One 2018; 13:e0208751. PMID: 30532180; PMCID: PMC6285978; DOI: 10.1371/journal.pone.0208751.
Abstract
Asking unanticipated questions in investigative interviews can elicit differences in the verbal behaviour of truth-tellers and liars: When faced with unanticipated questions, liars give less detailed and consistent responses than truth-tellers. Do such differences in verbal behaviour lead to an improvement in the accuracy of interviewers’ veracity judgements? Two empirical studies evaluated the efficacy of the unanticipated questions technique. Experiment 1 compared two types of unanticipated questions (questions regarding the planning of a task and questions regarding the specific spatial and temporal details associated with the task), assessing the veracity judgements of interviewers and verbal content of interviewees’ responses. Experiment 2 assessed veracity judgements of independent observers. Overall, the results provide little support for the technique. For interviewers, unanticipated questions failed to improve veracity judgement accuracy above chance. Reality monitoring analysis revealed qualitatively distinct information in the responses to the two unanticipated question types, though little distinction between the responses of truth-tellers and liars. Accuracy for observers was greater when judging transcripts of unanticipated questions, and this effect was stronger for spatial and temporal questions than planning questions. The benefits of unanticipated questioning appear limited to post-interview situations. Furthermore, the type of unanticipated question affects both the type of information gathered and the ability to detect deceit.
13. Monaro M, Gamberini L, Zecchinato F, Sartori G. False Identity Detection Using Complex Sentences. Front Psychol 2018; 9:283. PMID: 29559945; PMCID: PMC5845552; DOI: 10.3389/fpsyg.2018.00283.
Abstract
The use of faked identities is a current issue for both physical and online security. In this paper, we test the differences between subjects who report their true identity and those who give a fake identity, in response to control, simple, and complex questions. Asking complex questions is a new procedure for increasing liars' cognitive load, presented in this paper for the first time. The experiment consisted of an identity verification task, during which response times and errors were collected. Twenty participants were instructed to lie about their identity, whereas the other 20 were asked to respond truthfully. Different machine learning (ML) models were trained, reaching an accuracy of around 90-95% in distinguishing liars from truth-tellers based on error rate and response time. Then, to evaluate the generalization and replicability of these models, a new sample of 10 participants was tested and classified, obtaining an accuracy between 80% and 90%. In short, results indicate that liars may be efficiently distinguished from truth-tellers on the basis of their response times and errors on complex questions, with adequate generalization accuracy of the classification models.
Affiliation(s)
- Merylin Monaro
- Human Inspired Technology Research Centre, University of Padova, Padova, Italy
- Luciano Gamberini
- Human Inspired Technology Research Centre, University of Padova, Padova, Italy; Department of General Psychology, University of Padova, Padova, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padova, Padova, Italy
14. Monaro M, Toncini A, Ferracuti S, Tessari G, Vaccaro MG, De Fazio P, Pigato G, Meneghel T, Scarpazza C, Sartori G. The Detection of Malingering: A New Tool to Identify Made-Up Depression. Front Psychiatry 2018; 9:249. PMID: 29937740; PMCID: PMC6002526; DOI: 10.3389/fpsyt.2018.00249.
Abstract
Major depression is a high-prevalence mental disorder with a major socio-economic impact, in terms of both direct and indirect costs. Major depression symptoms can be faked or exaggerated in order to obtain economic compensation from insurance companies. Critically, depression is potentially easy to malinger, as the symptoms that characterize this psychiatric disorder are not difficult to emulate. Although some tools to assess malingering of psychiatric conditions are already available, they are principally based on self-report and are thus easily faked. In this paper, we propose a new method to automatically detect the simulation of depression, based on the analysis of mouse movements while the patient is engaged in a double-choice computerized task, responding to simple and complex questions about depressive symptoms. This tool has a key advantage over other tools: the kinematics of the movement are not consciously controllable by subjects, and are thus almost impossible to fake. Two groups of subjects were recruited for the study. The first, used to train different machine-learning algorithms, comprised 60 subjects (20 depressed patients and 40 healthy volunteers); the second, used to test the machine-learning models, comprised 27 subjects (9 depressed patients and 18 healthy volunteers). In both groups, the healthy volunteers were randomly assigned to a liars or a truth-tellers group. Machine-learning models were trained on mouse dynamics features collected during the subjects' responses, and on the number of symptoms reported by participants. Statistical results demonstrated that individuals malingering depression reported a higher number of depressive and non-depressive symptoms than depressed participants, whereas individuals suffering from depression took more time to perform the mouse-based tasks than both truth-tellers and liars. Machine-learning models reached a classification accuracy of up to 96% in distinguishing liars from depressed patients and truth-tellers. Despite this, the data are not conclusive, as the accuracy of the algorithm has not been compared with that of clinicians; this study presents a potentially useful method that warrants further investigation.
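One way mouse dynamics can expose hesitation is via trajectory-shape features. The sketch below uses a classic mouse-tracking measure, the maximum deviation of the cursor path from the ideal straight line between start and end points; the feature choice and toy trajectories are illustrative assumptions, not the authors' feature set.

```python
# Maximum deviation (MD): largest perpendicular distance between the
# recorded cursor trajectory and the straight start-to-end line.
# Hesitant responses tend to produce larger deviations.

def max_deviation(points):
    """points: list of (x, y) cursor samples, start first, end last."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    return max(
        abs(dy * (x - x0) - dx * (y - y0)) / norm
        for x, y in points
    )

# Direct movement toward the response button: zero deviation.
direct = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
# Hesitant movement that veers off before committing: large deviation.
hesitant = [(0, 0), (0, 2), (1, 3), (3, 3), (4, 4)]

print(max_deviation(direct))    # 0.0
print(max_deviation(hesitant))
```

A real pipeline would combine several such kinematic features (deviation, velocity, response time) as inputs to the trained classifiers described above.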
Affiliation(s)
- Merylin Monaro
- Department of General Psychology, University of Padova, Padova, Italy
- Andrea Toncini
- Department of General Psychology, University of Padova, Padova, Italy
- Stefano Ferracuti
- Department of Human Neurosciences, University of Roma "La Sapienza", Rome, Italy
- Gianmarco Tessari
- Department of Human Neurosciences, University of Roma "La Sapienza", Rome, Italy
- Maria G Vaccaro
- Neuroscience Center, Department of Medical and Surgical Science, University "Magna Graecia", Catanzaro, Italy
- Pasquale De Fazio
- Department of Psychiatry, University "Magna Graecia", Catanzaro, Italy
- Giorgio Pigato
- Psychiatry Unit, Azienda Ospedaliera di Padova, Padova, Italy
- Tiziano Meneghel
- Department of Mental Health, Azienda Unità Locale Socio Sanitaria 9, Treviso, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padova, Padova, Italy