1. Spaulding TJ, Blewitt A. The perceived guilt and innocence of adults with developmental language disorder and adults with typical language during a mock interrogation. J Commun Disord 2024; 110:106429. PMID: 38754317; DOI: 10.1016/j.jcomdis.2024.106429.
Abstract
PURPOSE: This study examined whether there were differences in the guilty and not guilty judgments of adults with developmental language disorder (DLD) and those with typical language (TL) functioning. METHOD: Twenty-four adults (12 DLD, 12 TL) were assigned to either the guilty or the not guilty condition. Those in the guilty condition engaged in a mock crime, while those in the not guilty condition were merely informed that a crime had been committed. Peer jurors were presented with video interrogations of the DLD (6 guilty, 6 not guilty) and TL (6 guilty, 6 not guilty) participants and were asked to make categorical judgments of guilty or not guilty and to indicate confidence in their judgments. RESULTS: In general, peer jurors were not accurate in their judgments of the accused and were more likely to judge individuals with DLD as guilty relative to accused individuals with TL. Peer jurors were particularly poor at judging innocent adults with DLD as not guilty and guilty adults with TL as guilty. Despite this, peer jurors were more confident than not in their guilty and not guilty determinations. CONCLUSIONS: Peer jurors are confident in their judgments of the guilt of the accused when they should not be, particularly in the case of accused adults with DLD. Implications are discussed.
Affiliations:
- Tammie J Spaulding: Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, United States.
- Audra Blewitt: Department of Speech, Language, and Hearing Sciences, Purdue University, Bloomington, IN, United States.

2. Zhao J, Fu C. Linguistic indicators for predicting the veracity of online health rumors. Front Public Health 2024; 11:1278503. PMID: 38269391; PMCID: PMC10806107; DOI: 10.3389/fpubh.2023.1278503.
Abstract
This study examines the role of language in discerning the authenticity of online health rumors. It focuses on five categories of linguistic indicators: (1) emotional language, characterized by sentiment words, sensory words, and continuous punctuation; (2) exaggerated language, defined by the presence of extreme numbers and extreme adverbs; (3) personalized language, denoted by first-person pronouns; (4) unprofessional language, represented by typographical errors; and (5) linkage language, marked by the inclusion of hyperlinks. The investigation used a dataset of 1,500 information items, 20% of which were verified as true while the remaining 80% were categorized as rumors; the items were sourced from two prominent rumor-clarification websites in China. Binomial logistic regression was used to determine whether the language of an online health rumor could predict its authenticity. The analysis showed that the presence of sentiment words, continuous punctuation marks, and extreme numbers and adverbs in an online health rumor predicted its authenticity. Personalized language, typographical errors, and hyperlinks were also found to be useful indicators. These results provide valuable insight into identifying health rumors from language-based features and could help individuals and organizations better assess the credibility of online health information.
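The five indicator categories above lend themselves to simple surface-level feature extraction. The sketch below illustrates one way to count such indicators in English text; the word lists and regular expressions are illustrative placeholders, not the dictionaries used in the study (which targeted Chinese-language items):

```python
import re

# Illustrative mini-lexicons; the study's actual dictionaries are not reproduced here.
SENTIMENT_WORDS = {"amazing", "terrible", "miracle", "deadly"}
EXTREME_ADVERBS = {"absolutely", "completely", "never", "always"}
FIRST_PERSON = {"i", "we", "my", "our"}

def extract_indicators(text: str) -> dict:
    """Count rough proxies for the five linguistic indicator categories."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "sentiment_words": sum(t in SENTIMENT_WORDS for t in tokens),
        # Runs of two or more ! or ? count as continuous punctuation.
        "continuous_punctuation": len(re.findall(r"[!?]{2,}", text)),
        # "Extreme numbers": large round figures such as 1,000,000.
        "extreme_numbers": len(re.findall(r"\b\d{1,3}(?:,?0{3}){2,}\b", text)),
        "extreme_adverbs": sum(t in EXTREME_ADVERBS for t in tokens),
        "first_person": sum(t in FIRST_PERSON for t in tokens),
        "hyperlinks": len(re.findall(r"https?://\S+", text)),
    }

features = extract_indicators(
    "This miracle cure absolutely works!! I tried it: https://example.com "
    "1,000,000 people recovered."
)
```

In a pipeline like the one described, such counts would become the predictors of a binomial logistic regression with verified-true vs. rumor as the outcome.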
Affiliations:
- Jingyi Zhao: College of International Studies, Southwest University, Chongqing, China.
- Cun Fu: School of Foreign Languages and Cultures, Chongqing University, Chongqing, China.

3. Loconte R, Russo R, Capuozzo P, Pietrini P, Sartori G. Verbal lie detection using Large Language Models. Sci Rep 2023; 13:22849. PMID: 38129677; PMCID: PMC10739834; DOI: 10.1038/s41598-023-50214-0.
Abstract
Human accuracy in detecting deception through intuitive judgments has been shown not to exceed chance level. Several automated verbal lie detection techniques employing Machine Learning and Transformer models have therefore been developed to reach higher accuracy. This study is the first to explore the performance of a Large Language Model, FLAN-T5 (small and base sizes), on a lie-detection classification task across three English-language datasets encompassing personal opinions, autobiographical memories, and future intentions. After performing a stylometric analysis to describe linguistic differences across the three datasets, we tested the small- and base-sized FLAN-T5 in three scenarios using 10-fold cross-validation: Scenario 1, with training and test sets drawn from the same dataset; Scenario 2, with the training set drawn from two datasets and the test set from the third; and Scenario 3, with training and test sets drawn from all three datasets. We reached state-of-the-art results in Scenarios 1 and 3, outperforming previous benchmarks. The results also revealed that performance depended on model size, with the larger model performing better. Furthermore, stylometric analysis was used for explainability, suggesting that linguistic features associated with the Cognitive Load framework may influence the model's predictions.
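The three train/test scenarios can be made concrete with a small split-building helper. This is a hedged sketch of the experimental design only (dataset names are placeholders); it does not reproduce the authors' FLAN-T5 fine-tuning code:

```python
# Sketch of the three evaluation scenarios described above, assuming three
# labelled deception datasets. Names are illustrative, not the authors' code.
def build_scenarios(datasets: dict) -> list:
    names = sorted(datasets)
    scenarios = []
    # Scenario 1: train and test within each single dataset (the paper uses
    # 10-fold cross-validation inside each split).
    for n in names:
        scenarios.append({"train": [n], "test": [n]})
    # Scenario 2: train on two datasets, test on the held-out third.
    for held_out in names:
        scenarios.append({"train": [n for n in names if n != held_out],
                          "test": [held_out]})
    # Scenario 3: train and test on the pooled data from all three datasets.
    scenarios.append({"train": names, "test": names})
    return scenarios

scenarios = build_scenarios({"opinions": [], "memories": [], "intentions": []})
```

Scenario 2 is the hardest setting because the model must generalize to a deception domain it never saw during training, which is consistent with the weaker results reported there.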
Affiliations:
- Riccardo Loconte, Pietro Pietrini: Molecular Mind Lab, IMT School for Advanced Studies Lucca, Piazza San Francesco 19, 55100 Lucca, LU, Italy.
- Roberto Russo: Department of Mathematics "Tullio Levi-Civita", University of Padova, Padova, Italy.
- Pasquale Capuozzo, Giuseppe Sartori: Department of General Psychology, University of Padova, Padova, Italy.

4. Bond GD, Speller LF, Cockrell LL, Webb KG, Sievers JL. 'Sleepy Joe' and 'Donald, King of Whoppers': Reality Monitoring and Verbal Deception in the 2020 U.S. Presidential Election Debates. Psychol Rep 2023; 126:3090-3103. PMID: 35634896; DOI: 10.1177/00332941221105212.
Abstract
The 2020 U.S. Presidential election was a campaign that has been characterized as 'one of the nastiest presidential campaigns in recent memory,' partly because the general election debates were highly contentious and featured frequent interruptions and several insults and invectives between candidates. This research compared the language used in the debates to fact-checked truths and lies using a Reality Monitoring (RM) deception detection algorithm in Linguistic Inquiry and Word Count (LIWC) to investigate the veracity of real-life, high-stakes verbal messages in the political context. We found that overall RM scores were low for debate language and fact-checked lies, which did not differ significantly from each other, while RM scores were significantly higher for fact-checked truth statements, indicating that most debate language was deceptive. This result supports the finding that the RM algorithm in LIWC distinguishes truths from lies and debate language in the context of politics. The 60.7% classification rate in this study may reflect a problem with the relatively short word counts of fact-checked lie and truth statements, but most probably reflects individual candidates' deviations in the RM features used in their statements. Each individual has a style that they use in communication: 'the way people talk and write have been recognized as stamps of individual identity.' Even with a corpus of many statements from the same individual candidates, candidates probably amplify certain RM features and diminish others in both their truthful and deceptive messages. This is a fruitful area of research that could be explored in future studies.
Affiliations:
- Gary D Bond, Lassiter F Speller, Lauren L Cockrell, Katelynn G Webb, Jaci L Sievers: Department of Psychology, Eastern New Mexico University, Portales, NM, USA.

5. Sprigings S, Brown CJV, ten Brinke L. Deception is associated with reduced social connection. Commun Psychol 2023; 1:19. PMID: 38665248; PMCID: PMC11041808; DOI: 10.1038/s44271-023-00021-0.
Abstract
Lies can have major consequences if undetected. Research to date has focused primarily on the consequences of deception for receivers once lies are discovered. We advance deception research and relationship science by studying the social consequences of deception for the sender, even if their lies remain undetected. In a correlational study of video conversations (Study 1; N = 776), an experimental study of text conversations (Study 2; N = 416), and a survey of dispositional tendencies (Study 3; N = 399), we find consistent evidence that people who lie tend to assume that others are lying too, and this impedes their ability to form social connections. The findings provide insight into how (dis)honesty and loneliness may go together, and suggest that lies, even when undetected, harm our relationships.
Affiliations:
- Samantha Sprigings, Cameo J. V. Brown, Leanne ten Brinke: Department of Psychology, University of British Columbia Okanagan, Kelowna, BC, Canada.

6. Sporer SL, Masip J. Millennia of legal content criteria of lies and truths: wisdom or common-sense folly? Front Psychol 2023; 14:1219995. PMID: 37771816; PMCID: PMC10524244; DOI: 10.3389/fpsyg.2023.1219995.
Abstract
Long before experimental psychology, religious writers, orators, and playwrights described examples of lie detection based on the verbal content of statements. Legal scholars collected evidence from individual cases and systematized them as "rules of evidence". Some of these resemble content cues used in contemporary research, while others point to working hypotheses worth exploring. To examine their potential validity, we re-analyzed data from a quasi-experimental study of 95 perjury cases. The outcomes support the fruitfulness of this approach. Travelling back in time searching for testable ideas about content cues to truth and deception may be worthwhile.
Affiliations:
- Siegfried L. Sporer: Department of Psychology and Sports Science, Justus-Liebig-University of Giessen, Giessen, Germany.
- Jaume Masip: Department of Social Psychology and Anthropology, Universidad de Salamanca, Salamanca, Spain.

7. Markowitz DM, Levine TR, Serota KB, Moore AD. Cross-checking journalistic fact-checkers: The role of sampling and scaling in interpreting false and misleading statements. PLoS One 2023; 18:e0289004. PMID: 37490489; PMCID: PMC10368232; DOI: 10.1371/journal.pone.0289004.
Abstract
Professional fact-checkers and fact-checking organizations provide a critical public service. Skeptics of modern media, however, often question the accuracy and objectivity of fact-checkers. The current study assessed agreement between two independent fact-checkers, The Washington Post and PolitiFact, regarding the false and misleading statements of then-President Donald J. Trump. Differences in statement selection and deceptiveness scaling were investigated. The Washington Post checked PolitiFact fact-checks 77.4% of the time (22.6% selection disagreement). Moderate agreement was observed for deceptiveness scaling, and nearly complete agreement for bottom-line attributed veracity. Additional cross-checking against other fact-checkers (Snopes, FactCheck.org), against original sources, and against fact-checks of the first 100 days of President Joe Biden's administration was inconsistent with potential ideology effects. Our evidence suggests that fact-checking is a difficult enterprise, that there is considerable variability between fact-checkers in the raw number of statements checked, and that selection and scaling account for apparent discrepancies among fact-checkers.
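Agreement between two raters on a deceptiveness scale, as studied above, is commonly quantified with percent agreement and a chance-corrected statistic such as Cohen's kappa. A minimal sketch, using invented toy ratings rather than the study's data:

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of items on which two raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(a)
    po = percent_agreement(a, b)                      # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both raters assigned categories independently
    # according to their own marginal frequencies.
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Toy ratings on a 4-point deceptiveness scale (illustrative, not study data).
wp = [1, 2, 2, 4, 4, 3, 1, 2]
pf = [1, 2, 3, 4, 4, 3, 2, 2]
```

Kappa discounts the agreement two raters would reach by chance alone, which is why "moderate" kappa can coexist with a high raw agreement percentage.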
Affiliations:
- David M. Markowitz: Department of Communication, Michigan State University, East Lansing, MI, United States of America.
- Timothy R. Levine: Department of Communication Studies, University of Alabama at Birmingham, Birmingham, AL, United States of America.
- Kim B. Serota: Department of Management and Marketing, School of Business Administration, Oakland University, Rochester, MI, United States of America.
- Alivia D. Moore: Department of Communication, Cornell University, Ithaca, NY, United States of America.

8. Dunbar NE, Burgoon JK, Chen X, Wang X, Ge S, Huang Q, Nunamaker J. Detecting ulterior motives from verbal cues in group deliberations. Front Psychol 2023; 14:1166225. PMID: 37292506; PMCID: PMC10244719; DOI: 10.3389/fpsyg.2023.1166225.
Abstract
Introduction: Forensic interviewing entails practitioners interviewing suspects to secure valid information and elicit confessions. Such interviews are often conducted in police stations but may also occur in field settings such as border crossings, security checkpoints, bus terminals, and sports venues. Because these real-world interviews often lack experimental control and ground truth, this investigation explored whether results of non-forensic interviews generalize to forensic ones. Methods: Organizational espionage was simulated to determine (1) what verbal signals distinguish truth from deception, (2) whether deception in groups aligns with deception in dyads, and (3) whether findings from non-forensic venues generalize to forensic ones. Engaging in a mock hiring deliberation, participants (groups of 4-5 strangers) reviewed and discussed resumes of potential candidates. Surreptitiously, two group members assigned the role of "organizational spies" attempted to persuade the group to hire an inferior candidate. Each group member presented notes from an interview of "their" candidate, followed by a discussion of all candidates. Spies were to use any means possible, including deception, to persuade others to select their candidate, and a financial incentive was offered for having one's candidate chosen. The interview reports and discussions were transcribed and analyzed with SPLICE, an automated text analysis program. Results: Deceivers were perceived as less trustworthy than truth-tellers, especially when the naïve players won, but overall deceivers were difficult for non-spies to detect. Deceivers' language was more complex and exhibited an "echoing" strategy of repeating others' opinions; this collusion evolved naturally, without pre-planning. No other verbal differences were evident, suggesting that the differences between spies and non-spies were subtle and difficult for truth-tellers to spot.
Discussion: Whether deception can be successfully detected hinges on a variety of factors, including the deceiver's skill at disguise and the detector's ability to sense and process information. Furthermore, group dynamics and communication context subtly moderate how deception manifests and influence the accuracy of detecting ulterior motives. Future investigations could encompass non-verbal communication channels and content-based verbal patterns, thereby providing a more comprehensive understanding of deception detection.
Affiliations:
- Norah E. Dunbar, Qing Huang: Department of Communication, University of California, Santa Barbara, Santa Barbara, CA, United States.
- Judee K. Burgoon, Xunyu Chen, Xinran Wang, Saiying Ge, Jay Nunamaker: Center for the Management of Information, The University of Arizona, Tucson, AZ, United States.

9. Sarzyńska-Wawer J, Hanusz K, Pawlak A, Szymanowska J, Wawer A. Are Intelligent People Better Liars? Relationships between Cognitive Abilities and Credible Lying. J Intell 2023; 11(4):69. PMID: 37103254; PMCID: PMC10143043; DOI: 10.3390/jintelligence11040069.
Abstract
Lying is essential to social communication. Despite years of research, its detection still poses many challenges. This is partly because some individuals are perceived as truthful and reliable, even when lying. However, relatively little is known about these effective liars. In our study, we focused on the cognitive functioning of effective liars. We tested 400 participants who completed tasks measuring executive functions, verbal fluency, and fluid intelligence, and also made four statements (two true and two false, half of them written and half oral). The reliability of the statements was then assessed. Only fluid intelligence was found to be relevant for reliable lying. This relationship was only evident for oral statements, suggesting that the importance of intelligence is highlighted when statements are made spontaneously without prior preparation.
Affiliations:
- Krzysztof Hanusz: Institute of Psychology, Polish Academy of Sciences, 03-378 Warsaw, Poland.
- Aleksandra Pawlak: Polish-Japanese Academy of Information Technology, 02-008 Warsaw, Poland.
- Julia Szymanowska: Institute of Cognitive and Behavioral Neuroscience, University of Social Sciences and Humanities, 03-815 Warsaw, Poland.
- Aleksander Wawer: Institute of Computer Sciences, Polish Academy of Sciences, 01-248 Warsaw, Poland.

10. Meta-Research Perspectives on Verbal Lie Detection. Brain Sci 2023; 13(3):392. PMID: 36979202; PMCID: PMC10046305; DOI: 10.3390/brainsci13030392.
Abstract
Verbal lie detection (VLD) research, conducted mainly in the cognitive sciences, examines content-based indicators that differ between truth-tellers and lie-tellers. Following the meta-research approach (examination of the research itself), the current paper aimed to ascertain the current status of VLD research across three components: the search for VLD indicators, VLD coding, and VLD research tracks. It highlights several issues that challenge VLD research; these are discussed, along with suggestions for how to address them. This scrutiny may contribute to a further advancement of the field of VLD research and, as a result, an improvement of VLD practices.
11. Solà-Sales S, Alzetta C, Moret-Tatay C, Dell'Orletta F. Analysing Deception in Witness Memory through Linguistic Styles in Spontaneous Language. Brain Sci 2023; 13(2):317. PMID: 36831859; PMCID: PMC9953826; DOI: 10.3390/brainsci13020317.
Abstract
The act of lying and its detection have raised interest in many fields, from the legal system to our daily lives. Because testimonies are commonly assessed through linguistic parameters, natural language processing (NLP), a research field concerned with programming computers to process and analyse natural language texts or speech, is of interest on this front. This study examined the linguistic styles of true and simulated deceptive testimonies collected with the aim of studying witness memory. Participants were asked to act as witnesses of a crime by retelling a story they had just read, and cognitive interviewing techniques were used to collect their testimony under two conditions: truth and simulated deception. A sample of 48 participants volunteered for the study. We compared each participant's testimonies across conditions to analyse variation in (i) lexical features, (ii) linguistic features, and (iii) content and speech characteristics (disfluencies) depending on the narrative condition. Concerning lexical properties, adjectives were the grammatical category that varied most between truthful and deceptive testimonies. In the linguistic analysis, truthful testimonies were generally longer than deceptive ones in the number of words and sentences, and were characterised by more articulated sentence structures; these differences were statistically significant. Regarding content, cognitive criteria (details) and admissions of lack of memory were more present in truthful statements. By providing an objective measure, these results are of interest for developing NLP tools to assess the credibility of testimonies in forensics.
Affiliations:
- Sara Solà-Sales (corresponding author): Doctoral School, Catholic University of Valencia San Vicente Mártir, San Agustín 3, Esc. A, Entresuelo 1, 46002 Valencia, Spain.
- Chiara Alzetta, Felice Dell'Orletta: ItaliaNLP Lab, CNR-Institute for Computational Linguistics "A. Zampolli", Via G. Moruzzi 1, 56124 Pisa, Italy.
- Carmen Moret-Tatay: MEB Lab, Faculty of Psychology, Universidad Católica de Valencia San Vicente Mártir, 14600 Valencia, Spain.

12. Markowitz DM, Hancock JT, Woodworth MT, Ely M. Contextual considerations for deception production and detection in forensic interviews. Front Psychol 2023; 14:1134052. PMID: 36824303; PMCID: PMC9941173; DOI: 10.3389/fpsyg.2023.1134052.
Abstract
Most deception scholars agree that deception production and deception detection effects often display mixed results across settings. For example, some liars use more emotion than truth-tellers when discussing fake opinions on abortion, but not when communicating fake distress. Similarly, verbal and nonverbal cues are often inconsistent predictors for deception detection, leading to mixed accuracies and detection rates. Why are lie production and detection effects typically inconsistent? In this piece, we argue that aspects of the context often go unconsidered in how lies are produced and detected, and that greater theory-building related to the contextual constraints on deception is therefore required. We reintroduce and extend the Contextual Organization of Language and Deception (COLD) model, a framework that outlines how psychological dynamics, pragmatic goals, and genre conventions are aspects of the context that moderate the relationship between deception and communication behavior such as language. We extend this foundation by proposing three additional aspects of the context for the COLD model - individual differences, situational opportunities for deception, and interpersonal characteristics - that can specifically inform and potentially improve forensic interviewing. We conclude with a forward-looking perspective for deception researchers and practitioners on the need for more theoretical explication of deception and its detection in relation to context.
Affiliations:
- David M. Markowitz (corresponding author), Maxwell Ely: School of Journalism and Communication, University of Oregon, Eugene, OR, United States.
- Jeffrey T. Hancock: Department of Communication, Stanford University, Stanford, CA, United States.
- Michael T. Woodworth: Department of Psychology, University of British Columbia Okanagan, Kelowna, BC, Canada.

13. Sarzynska-Wawer J, Pawlak A, Szymanowska J, Hanusz K, Wawer A. Truth or lie: Exploring the language of deception. PLoS One 2023; 18:e0281179. PMID: 36730363; PMCID: PMC9894434; DOI: 10.1371/journal.pone.0281179.
Abstract
Lying appears in everyday oral and written communication, and detecting it on the basis of linguistic analysis is therefore particularly important. Our study aimed to verify whether the differences between true and false statements in complexity and sentiment reported in previous studies can be confirmed using tools dedicated to measuring those factors. Further, we investigated whether linguistic features that differentiate true and false utterances in English (namely utterance length, concreteness, and particular parts of speech) are also present in the Polish language. We analyzed nearly 1,500 true and false statements, half of them transcripts of spoken statements and the other half written statements. Our results show that false statements are less complex in terms of vocabulary, are more concise and concrete, and contain more positive words and fewer negative words. We found no significant differences between spoken and written lies. Using these data, we built classifiers to automatically distinguish true from false utterances, achieving an accuracy of 60%. Our results provide a significant contribution to previous conclusions regarding linguistic deception indicators.
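As an illustration of how such cues could feed a classifier, the sketch below counts statement length and sentiment-word frequencies and applies a naive decision rule in the direction reported above (shorter, more positively worded statements pattern with lies). The word lists and threshold are invented placeholders, not the study's trained model:

```python
# Illustrative mini-lexicons; the study's actual sentiment resources (and its
# Polish-language data) are not reproduced here.
POSITIVE = {"good", "happy", "great", "nice"}
NEGATIVE = {"bad", "sad", "awful", "angry"}

def deception_cues(tokens: list[str]) -> dict:
    """Count the surface cues discussed above for a tokenized statement."""
    return {
        "length": len(tokens),
        "positive": sum(t in POSITIVE for t in tokens),
        "negative": sum(t in NEGATIVE for t in tokens),
    }

def looks_deceptive(tokens: list[str]) -> bool:
    """Naive rule: short, positively tinted statements pattern with lies here."""
    c = deception_cues(tokens)
    return c["length"] < 10 and c["positive"] >= c["negative"]

example = "it was a good trip and everyone was happy".split()
```

A real system would replace this hand-set rule with a classifier trained on labelled statements, which is roughly what the reported 60% accuracy reflects.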
Affiliations:
- Krzysztof Hanusz: Institute of Psychology, Polish Academy of Sciences, Warszawa, Poland.
- Aleksander Wawer: Institute of Computer Sciences, Polish Academy of Sciences, Warszawa, Poland.

14. Yang C, You X, Xie X, Duan Y, Wang B, Zhou Y, Feng H, Wang W, Fan L, Huang G, Shen X. Development of a Chinese werewolf deception database. Front Psychol 2023; 13:1047427. PMID: 36698609; PMCID: PMC9869050; DOI: 10.3389/fpsyg.2022.1047427.
Abstract
Although it is important to accurately detect deception, limited research in this area has involved Asian people. We aim to address this gap by studying the identification of deception in Asian participants in realistic environments. We developed a Chinese Werewolf Deception Database (C2W2D), which consists of 168 video clips (84 deception videos and 84 honest videos); a total of 1,738,760 frames of facial data were recorded. Fifty-eight healthy undergraduates (24 men and 34 women) and 26 drug addicts (all men) participated in a werewolf game. C2W2D was developed using a "werewolf" deception game paradigm in which the participants spontaneously tell the truth or lie, with two synced high-speed cameras capturing the game process. To explore the differences between lying and truth-telling in the database, descriptive statistics (e.g., duration and quantity) and hypothesis tests (e.g., t-tests) were conducted on facial-expression action units (AUs). C2W2D contributes a relatively sizable number of deceptive and honest samples with high ecological validity, which can be used to study individual differences and the underlying mechanisms of lying and truth-telling in drug addicts and healthy people.
Collapse
Affiliation(s)
- Chaocao Yang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China; School of Psychology, Shaanxi Normal University, Xi’an, China; Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Shaanxi Normal University, Xi’an, China
- Xuqun You
- School of Psychology, Shaanxi Normal University, Xi’an, China; Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Shaanxi Normal University, Xi’an, China
- Xudong Xie
- School of Psychology, Shaanxi Normal University, Xi’an, China; Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Shaanxi Normal University, Xi’an, China
- Yuanyuan Duan
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Buxue Wang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Yuxi Zhou
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Hong Feng
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Wenjing Wang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Ling Fan
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Genying Huang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Xunbing Shen
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China. *Correspondence: Xunbing Shen.
15
Dykstra VW, Lyon TD, Evans AD. Maltreated and non-maltreated children's truthful and dishonest reports: Linguistic and syntactic differences. Front Psychol 2022; 13:1025419. [PMID: 36591106] [PMCID: PMC9797025] [DOI: 10.3389/fpsyg.2022.1025419]
Abstract
Introduction Adults are typically poor judges of the veracity of statements, creating a need for alternative lie-detection methods. One alternative to human lie-detectors is computer-based linguistic analysis, which may offer a more reliable way to detect dishonesty. Moreover, while previous research has examined linguistic differences between typically developing children's and adults' truthful and dishonest reports, no study to date has examined whether maltreated children exhibit different linguistic cues to dishonesty. Thus, the current study examined maltreated and non-maltreated children's linguistic and syntactic cues to truthful and dishonest reports. Methods Nine- to 12-year-olds, half of whom were maltreated, played a computer game with a confederate: half of the children experienced a transgression (i.e., playing a forbidden game and crashing the computer) and were coached to conceal it, and half experienced no transgression (i.e., they simply played the game). All children were then interviewed about the event. The study used automated linguistic and syntactic analysis software to compare children's truthful reports (no transgression occurred) with dishonest reports. Results and Discussion Maltreated and non-maltreated children did not differ in their indicators of dishonesty. Dishonest reporters used more first-person plural pronouns and cognitive mechanism terms and produced less syntactically complex reports than truthful reporters. Finally, first-person plural pronouns, cognitive mechanism terms, and syntactic complexity accurately classified the veracity of children's reports 74.2% of the time. These findings identify a new indicator of dishonesty (syntactic complexity) and suggest that indicators from typically developing populations may apply to maltreated children when coaching has occurred.
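The three predictive cues above lend themselves to a simple linear model. The sketch below is a hypothetical logistic scorer with invented weights (not the study's fitted model) showing how such cue rates could be combined to classify a report:

```python
import math

# Hypothetical weights for the three cues the study found predictive:
# first-person plural pronouns and cognitive mechanism terms raise the
# odds of a dishonest report; syntactic complexity lowers them.
WEIGHTS = {"we_pronouns": 0.8, "cognitive_terms": 0.6, "syntactic_complexity": -0.7}
BIAS = -0.2

def p_dishonest(features):
    """Logistic combination of cue rates -> probability the report is dishonest."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Cue rates (e.g., per 100 words) for two hypothetical reports.
coached = {"we_pronouns": 2.0, "cognitive_terms": 1.5, "syntactic_complexity": 0.5}
truthful = {"we_pronouns": 0.2, "cognitive_terms": 0.4, "syntactic_complexity": 1.8}
print(p_dishonest(coached) > p_dishonest(truthful))  # True
```

A real replication would estimate the weights from coded transcripts; the study reports 74.2% classification accuracy using these three cues.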
Affiliation(s)
- Victoria W. Dykstra
- Psychology Department, Brock University, St. Catharines, ON, Canada. *Correspondence: Victoria W. Dykstra.
- Thomas D. Lyon
- Gould School of Law, University of Southern California, Los Angeles, CA, United States
- Angela D. Evans
- Psychology Department, Brock University, St. Catharines, ON, Canada
16
Vrij A, Granhag PA, Ashkenazi T, Ganis G, Leal S, Fisher RP. Verbal Lie Detection: Its Past, Present and Future. Brain Sci 2022; 12:1644. [PMID: 36552104] [PMCID: PMC9775025] [DOI: 10.3390/brainsci12121644]
Abstract
This article provides an overview of verbal lie detection research. This type of research began in the 1970s with examinations of the relationship between deception and specific words. We briefly review this initial research. In the late 1980s, Criteria-Based Content Analysis (CBCA) emerged, a veracity assessment tool containing a list of verbal criteria. This was followed by Reality Monitoring (RM) and Scientific Content Analysis (SCAN), two other veracity assessment tools that contain lists of verbal criteria. We discuss their contents, theoretical rationales, and ability to identify truths and lies. We also discuss similarities and differences between CBCA, RM, and SCAN. In the mid-2000s, 'interviewing to detect deception' emerged, with the goal of developing specific interview protocols aimed at enhancing or eliciting verbal veracity cues. We outline the four most widely researched interview protocols to date: the Strategic Use of Evidence (SUE), Verifiability Approach (VA), Cognitive Credibility Assessment (CCA), and Reality Interviewing (RI). We briefly discuss the workings of these protocols, their theoretical rationales and empirical support, and the similarities and differences between them. We conclude by elaborating on how neuroscientists can inform and improve verbal lie detection.
Affiliation(s)
- Aldert Vrij
- Department of Psychology, University of Portsmouth, Portsmouth PO1 2DY, UK
- *Correspondence: Aldert Vrij.
- Pär Anders Granhag
- Department of Psychology, University of Gothenburg, 405 30 Gothenburg, Sweden
- Tzachi Ashkenazi
- Department of Criminology, Bar Ilan University, Ramat Gan 5290002, Israel
- Department of Criminology, Ashkelon Academic College, Ashkelon 78211, Israel
- Giorgio Ganis
- School of Psychology, Brain Research and Imaging Centre, University of Plymouth, Plymouth PL4 8AA, UK
- Sharon Leal
- Department of Psychology, University of Portsmouth, Portsmouth PO1 2DY, UK
- Ronald P. Fisher
- Department of Psychology, Florida International University, Miami, FL 33199, USA
17
Zhao J, Fu C, Kang X. Content characteristics predict the putative authenticity of COVID-19 rumors. Front Public Health 2022; 10:920103. [PMID: 36033743] [PMCID: PMC9399738] [DOI: 10.3389/fpubh.2022.920103]
Abstract
Rumors regarding COVID-19 have been prevalent on the Internet and affect the control of the COVID-19 pandemic. Using 1,296 COVID-19 rumors collected from an online platform (piyao.org.cn) in China, we found measurable differences in the content characteristics between true and false rumors. We revealed that the length of a rumor's headline is negatively related to the probability of a rumor being true [odds ratio (OR) = 0.37, 95% CI (0.30, 0.44)]. In contrast, the length of a rumor's statement is positively related to this probability [OR = 1.11, 95% CI (1.09, 1.13)]. In addition, we found that a rumor is more likely to be true if it contains concrete places [OR = 20.83, 95% CI (9.60, 48.98)] and it specifies the date or time of events [OR = 22.31, 95% CI (9.63, 57.92)]. The rumor is also likely to be true when it does not evoke positive or negative emotions [OR = 0.15, 95% CI (0.08, 0.29)] and does not include a call for action [OR = 0.06, 95% CI (0.02, 0.12)]. By contrast, the presence of source cues [OR = 0.64, 95% CI (0.31, 1.28)] and visuals [OR = 1.41, 95% CI (0.53, 3.73)] is related to this probability with limited significance. Our findings provide some clues for identifying COVID-19 rumors using their content characteristics.
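The odds ratios reported above come from logistic regression. As a reminder of the mechanics (illustrative numbers only, not the paper's data): a coefficient β maps to an odds ratio via exp(β), and a 95% Wald interval exponentiates β ± 1.96·SE:

```python
import math

def odds_ratio(beta):
    """A logistic-regression coefficient beta corresponds to OR = exp(beta)."""
    return math.exp(beta)

def wald_ci_95(beta, se):
    """95% Wald confidence interval for the odds ratio."""
    return math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)

# Illustrative: beta = -0.994 gives OR ~= 0.37, the magnitude reported
# above for headline length (the standard error here is invented).
beta, se = -0.994, 0.09
print(round(odds_ratio(beta), 2))  # 0.37
low, high = wald_ci_95(beta, se)
print(round(low, 2), round(high, 2))
```

An OR below 1 with a confidence interval excluding 1 (as for headline length above) indicates a reliable negative association with the rumor being true.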
Affiliation(s)
- Jingyi Zhao
- College of International Studies, Southwest University, Chongqing, China
- Cun Fu
- School of Foreign Languages and Cultures, Chongqing University, Chongqing, China
- Xin Kang
- School of Foreign Languages and Cultures, Chongqing University, Chongqing, China
- Research Center for Language, Cognition and Language Application, Chongqing University, Chongqing, China
18
Volz S, Reinhard MA, Müller P. Does Ego Depletion Elicit Stronger Cues of Deception? OPEN PSYCHOLOGY 2022. [DOI: 10.1515/psych-2022-0129]
Abstract
Lying is cognitively demanding and presumably requires self-regulation. According to ego depletion theory, a task that requires self-regulation should therefore impair an individual's ability to tell a convincing lie in a later task. Consequently, it was hypothesized that a manipulation of ego depletion would enhance behavioral differences between liars and truth-tellers. To manipulate ego depletion, participants worked (vs. did not work) on a task in which they had to suppress dominant responses while copying a text. Subsequently, they talked in a simulated job interview about a job they had previously held (vs. not held). In the sample of 164 participants, there was no evidence to support the hypothesis: the expected Ego Depletion x Veracity interaction was not significant for any of the 15 behavioral cues coded in the videotaped interviews. Although the main effect of ego depletion was significant at the multivariate level for the first of two parts of the interview, none of the univariate main effects reached the significance level corrected for multiple testing. Bayesian analyses rendered moderate to strong evidence in favor of the null hypothesis. Possible implications of the results, including those for ego depletion theory, are discussed.
Affiliation(s)
- Sarah Volz
- Department of Psychology, University of Kassel
- Patrick Müller
- Faculty of Civil Engineering, Building Physics, and Business, University of Applied Sciences Stuttgart
19
Van Der Zee S, Poppe R, Havrileck A, Baillon A. A Personal Model of Trumpery: Linguistic Deception Detection in a Real-World High-Stakes Setting. Psychol Sci 2021; 33:3-17. [PMID: 34932410] [DOI: 10.1177/09567976211015941]
Abstract
Language use differs between truthful and deceptive statements, but not all differences are consistent across people and contexts, complicating the identification of deceit in individuals. By relying on fact-checked tweets, we showed in three studies (Study 1: 469 tweets; Study 2: 484 tweets; Study 3: 24 models) how well personalized linguistic deception detection performs by developing the first deception model tailored to an individual: the 45th U.S. president. First, we found substantial linguistic differences between factually correct and factually incorrect tweets. We developed a quantitative model and achieved 73% overall accuracy. Second, we tested out-of-sample prediction and achieved 74% overall accuracy. Third, we compared our personalized model with linguistic models previously reported in the literature. Our model outperformed existing models by 5 percentage points, demonstrating the added value of personalized linguistic analysis in real-world settings. Our results indicate that factually incorrect tweets by the U.S. president are not random mistakes of the sender.
Affiliation(s)
- Sophie Van Der Zee
- Department of Applied Economics, Erasmus School of Economics, Erasmus University Rotterdam
- Ronald Poppe
- Department of Information and Computing Sciences, Utrecht University
- Alice Havrileck
- Department of Applied Economics, Erasmus School of Economics, Erasmus University Rotterdam; Department of Economics and Management, École Normale Supérieure Paris-Saclay
- Aurélien Baillon
- Department of Applied Economics, Erasmus School of Economics, Erasmus University Rotterdam
20
Berry‐Blunt AK, Holtzman NS, Donnellan MB, Mehl MR. The story of “I” tracking: Psychological implications of self‐referential language use. SOCIAL AND PERSONALITY PSYCHOLOGY COMPASS 2021. [DOI: 10.1111/spc3.12647]
Affiliation(s)
- M. Brent Donnellan
- Department of Psychology, Michigan State University, East Lansing, Michigan, USA
- Matthias R. Mehl
- Department of Psychology, University of Arizona, Tucson, Arizona, USA
21
Abstract
In the last decade, fields such as psychology and natural language processing have devoted considerable attention to automating deception detection, developing and employing a wide array of automated and computer-assisted methods for this purpose. Similarly, another emerging research area focuses on computer-assisted deception detection using linguistics, with promising results. Accordingly, this article first provides an overall review of the state of the art of corpus-based research exploring linguistic cues to deception, together with an overview of several approaches to the study of deception and of previous research into its linguistic detection. To promote corpus-based research in this context, the study then explores linguistic cues to deception in written Spanish with the aid of an automatic text classification tool, using an ad hoc corpus containing ground-truth data. Interestingly, the key findings reveal that, although a set of linguistic cues contributes to the global statistical classification model, there are discursive differences across the subcorpora, with better classification results on the subcorpus containing emotionally loaded language.
22
Finding Evidence of Fraudster Companies in the CEO’s Letter to Shareholders with Sentiment Analysis. INFORMATION 2021. [DOI: 10.3390/info12080307]
Abstract
The goal of our research was to assess whether the observation that deceptive texts have a less positive tone than truthful ones could be made operational and used to build a classifier for the particular case of fraudsters' letters written in Spanish. The data were the letters that CEOs address to company shareholders in their annual financial reports, and the task was to identify the letters of companies that committed financial misconduct or fraud. This case was challenging for two reasons: first, most prior research worked with spontaneous written or spoken texts, while these letters are not spontaneous; second, most research in this area worked on English texts, whereas we validated the linguistic cues found as evidence of deception for Spanish texts. Our results confirm that an SVM trained on a bag-of-words model of frequent adjectives can achieve 81% accuracy, because these adjectives carry information about the positive or negative tone and the word combinations that characterize fraudsters' texts.
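At inference time, a linear SVM over adjective counts reduces to the sign of a weighted sum. The sketch below illustrates that pipeline with an invented vocabulary and weights (the paper's actual features and fitted model are not reproduced here):

```python
from collections import Counter

# Hypothetical frequent-adjective vocabulary with invented linear weights;
# a trained linear SVM predicts via sign(w . x + b) over such counts.
VOCAB = ["excellent", "exceptional", "solid", "challenging", "uncertain"]
WEIGHTS = [0.9, 1.1, 0.4, -0.2, -0.6]  # positive = fraud-like tone in this toy model
BIAS = -1.0

def bag_of_words(text):
    """Count occurrences of each vocabulary adjective in the text."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in VOCAB]

def looks_fraudulent(text):
    """Linear decision function: positive score -> flagged as fraud-like."""
    x = bag_of_words(text)
    score = BIAS + sum(w * xi for w, xi in zip(WEIGHTS, x))
    return score > 0

print(looks_fraudulent("an exceptional excellent year with excellent results"))  # True
print(looks_fraudulent("a challenging and uncertain year"))                      # False
```

Training the weights (e.g., with a hinge-loss solver) on labeled letters is what the study's reported 81% accuracy refers to; the weights here are purely illustrative.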
23
Sporer SL, Ulatowska J. Indirect and Unconscious Deception Detection: Too Soon to Give Up? Front Psychol 2021; 12:601852. [PMID: 34122212] [PMCID: PMC8188332] [DOI: 10.3389/fpsyg.2021.601852]
Affiliation(s)
- Joanna Ulatowska
- Department of Psychology, Nicolaus Copernicus University, Toruń, Poland
24
Ansari S, Gupta S. Customer perception of the deceptiveness of online product reviews: A speech act theory perspective. INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT 2021. [DOI: 10.1016/j.ijinfomgt.2020.102286]
25
Meissner CA. “What works?” Systematic reviews and meta‐analyses of the investigative interviewing research literature. APPLIED COGNITIVE PSYCHOLOGY 2021. [DOI: 10.1002/acp.3808]
26
Identifying Fake News on Social Networks Based on Natural Language Processing: Trends and Challenges. INFORMATION 2021. [DOI: 10.3390/info12010038]
Abstract
The epidemic spread of fake news is a side effect of the expansion of social networks as channels for circulating news, in contrast to traditional mass media such as newspapers, magazines, radio, and television. People's inability to reliably distinguish true from false facts makes fake news a threat to logical truth, democracy, journalism, and the credibility of government institutions. In this paper, we survey methods for preprocessing data in natural language, vectorization, dimensionality reduction, machine learning, and quality assessment of information retrieval. We also contextualize the identification of fake news and discuss research initiatives and opportunities.
27
Tomas F, Dodier O, Demarchi S. Baselining affects the production of deceptive narratives. APPLIED COGNITIVE PSYCHOLOGY 2020. [DOI: 10.1002/acp.3768]
Affiliation(s)
- Frédéric Tomas
- Department of Psychology, Université Paris 8, Saint-Denis, France
- Olivier Dodier
- Department of Psychology, Université de Nantes, Nantes, France
- Samuel Demarchi
- Department of Psychology, Université Paris 8, Saint-Denis, France
28
Markowitz DM. The deception faucet: A metaphor to conceptualize deception and its detection. NEW IDEAS IN PSYCHOLOGY 2020. [DOI: 10.1016/j.newideapsych.2020.100816]
29
Twyman NW, Pentland SJ, Spitzley L. Design Principles for Signal Detection in Modern Job Application Systems: Identifying Fabricated Qualifications. J MANAGE INFORM SYST 2020. [DOI: 10.1080/07421222.2020.1790201]
Affiliation(s)
- Nathan W. Twyman
- Department of Information Systems, Marriott School of Business, Brigham Young University, Provo, UT, USA
- Steven J. Pentland
- Information Technology and Supply Chain Management, College of Business and Economics, Boise State University, Boise, ID, USA
- Lee Spitzley
- Information Security and Digital Forensics, School of Business, University at Albany, State University of New York, Albany, NY, USA
30
Tomas F, Tsimperidis I, Demarchi S, El Massioui F. Keyboard dynamics discrepancies between baseline and deceptive eyewitness narratives. APPLIED COGNITIVE PSYCHOLOGY 2020. [DOI: 10.1002/acp.3743]
Affiliation(s)
- Frédéric Tomas
- Human and Artificial Cognitions Laboratory, Department of Psychology, University Paris 8, Saint-Denis, France
- Ioannis Tsimperidis
- Department of Electrical and Computer Engineering, Democritus University of Thrace, Komotini, Greece
- Samuel Demarchi
- Human and Artificial Cognitions Laboratory, Department of Psychology, University Paris 8, Saint-Denis, France
- Farid El Massioui
- Human and Artificial Cognitions Laboratory, Department of Psychology, University Paris 8, Saint-Denis, France
31
Predictive modeling for trustworthiness and other subjective text properties in online nutrition and health communication. PLoS One 2020; 15:e0237144. [PMID: 32760095] [PMCID: PMC7410325] [DOI: 10.1371/journal.pone.0237144]
Abstract
While the internet has democratized and accelerated content creation and sharing, it has also made people more vulnerable to manipulation and misinformation, and received information can be further distorted by psychological biases. This is especially problematic in health-related communication, which can greatly affect individuals' quality of life. We assembled and analyzed 364 texts related to nutrition and health from Finnish online sources, such as news, columns, and blogs, and asked non-experts to evaluate the texts subjectively. Texts were rated for trustworthiness, sentiment, logic, information, clarity, and neutrality. We then estimated individual biases and consensus ratings that were used to train regression models. Firstly, we found that trustworthiness was significantly correlated with the information, neutrality, and logic of the texts. Secondly, individual ratings for information and logic were significantly biased by the age and diet of the raters. Our best regression models explained up to 70% of the total variance of consensus ratings based on low-level properties of the texts, such as semantic embeddings, the presence of key terms and part-of-speech tags, references, quotes, and paragraphs. With a novel combination of crowdsourcing, behavioral analysis, natural language processing, and predictive modeling, our study contributes to the automated identification of reliable, high-quality online information. While critical evaluation of truthfulness cannot be surrendered entirely to machines, our findings provide new insights into the automated evaluation of subjective text properties and the analysis of morphologically rich languages with regard to trustworthiness.
32
Affiliation(s)
- Thomas Holtgraves
- Department of Psychological Science, Ball State University
- School of Communication Studies, Ohio University
- Elizabeth Jenkins
- Department of Psychological Science, Ball State University
- School of Communication Studies, Ohio University
33
Volz S, Reinhard M, Müller P. Why don't you believe me? Detecting deception in messages written by nonnative and native speakers. APPLIED COGNITIVE PSYCHOLOGY 2019. [DOI: 10.1002/acp.3615]
Affiliation(s)
- Sarah Volz
- Department of Psychology, University of Kassel, Kassel, Germany
- Patrick Müller
- Faculty of Civil Engineering, Building Physics, and Business, University of Applied Sciences Stuttgart, Stuttgart, Germany
34
Li D, Santos E. Discriminating deception from truth and misinformation: an intent-level approach. J EXP THEOR ARTIF IN 2019. [DOI: 10.1080/0952813x.2019.1652354]
Affiliation(s)
- Deqing Li
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- Eugene Santos
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
35
Leng H, Wang Y, Li Q, Yang L, Sun Y. Sophisticated Deception in Junior Middle School Students: An ERP Study. Front Psychol 2019; 9:2675. [PMID: 30687155] [PMCID: PMC6336891] [DOI: 10.3389/fpsyg.2018.02675]
Abstract
Sophisticated deception refers to deceiving others based on inferences about their mental states (e.g., answering honestly when inferring that the other person will not believe the answer). Studying the brain mechanisms of sophisticated deception in junior middle school students can provide physiological evidence for deception detection and for measuring deceptive ability. Sixteen junior middle school students engaged in different trial types (i.e., instructed truth/lie and chosen truth/lie), during which we recorded their response times (RTs) along with electroencephalographic data to compute event-related potentials (ERPs). We observed significant differences in amplitude [N2, P3, N450, and medial frontal negativity (MFN)] between chosen reactions (sophisticated deception and simple deception) and instructed reactions (instructed truth and instructed lie) in both the stimulus presentation and feedback stages. In the former, participants' task scores in the chosen condition were significantly and positively correlated with the N2 amplitude over the central brain area during sophisticated deception. In the latter, task scores in the chosen condition were negatively correlated with the MFN amplitude over the left frontal and left frontocentral regions. Overall, deceptive intention, rather than simply making counterfactual statements, appears to underlie the increased demand for cognitive control in deceivers. This can be attributed to deceivers' need to consider their opponent's mental state: the better a deceiver's deceptive ability, the more conjectures they make about the opponent's mental state during sophisticated deception and the more they monitor conflict; the less conflict they experience while answering honestly with the intention to deceive, the more conflict may arise when the outcome of their deception is inconsistent with these conjectures.
Affiliation(s)
- Haizhou Leng
- School of Psychology, Liaoning Normal University, Dalian, China
- Yanrong Wang
- School of Psychology, Liaoning Normal University, Dalian, China
- Qian Li
- Xingtai Special Education School, Xingtai, China
- Lizhu Yang
- School of Psychology, Liaoning Normal University, Dalian, China
- Yan Sun
- School of Psychology, Liaoning Normal University, Dalian, China
36
Manzanero AL, Scott MT, Vallet R, Aróztegui J, Bull R. Criteria-based Content Analysis in True and Simulated Victims with Intellectual Disability. ANUARIO DE PSICOLOGÍA JURÍDICA 2019. [DOI: 10.5093/apj2019a1]
37
Wu J, Liu Y. Deception detection methods incorporating discourse network metrics in synchronous computer-mediated communication. J Inf Sci 2019. [DOI: 10.1177/0165551518823176]
Abstract
The prevalence of deception in computer-mediated communication and the risk of misjudgement based on deceptive information call for effective detection methods of deception. Extant models for online deception detection rely mainly on verbal behaviours of participants while largely ignoring context. Discourse behaviour analysis, which can better investigate the information in context, has been proved effective for online deception detection; nevertheless, these discourse behaviours have been analysed in isolation without referring to other behaviours in context. To achieve the ultimate goal of effective prediction of deception in synchronous computer-mediated communication, this research exploits temporal networks in uncovering the dynamics of deception behaviours, proposes novel deception detection methods using discourse network metrics as predictive features, and empirically evaluates the performances of deception detection methods incorporating three types of predictive features (non-discourse features, discourse features and discourse network metrics). The results suggest that discourse network features are more effective in detecting deception and incorporating these features with non-discourse and discourse features can significantly improve the performance of deception detection. The findings not only demonstrate the efficacy of structural features in deception detection but also offer both methodological and theoretical contributions to deception detection from the perspective of temporal network.
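As an illustration of what a "discourse network metric" can look like (a toy reply graph with invented data; the paper's actual temporal-network features are richer), degree centrality can be computed for each chat participant from who-replies-to-whom edges:

```python
from collections import defaultdict

# Toy who-replies-to-whom edges from a hypothetical chat session.
replies = [("A", "B"), ("B", "A"), ("C", "A"), ("A", "C"), ("B", "C")]

def degree_centrality(edges):
    """Degree of each node, normalized by (n - 1) as in standard network analysis."""
    degree = defaultdict(int)
    nodes = set()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        nodes.update((u, v))
    n = len(nodes)
    return {node: degree[node] / (n - 1) for node in nodes}

centrality = degree_centrality(replies)
print(centrality["A"])  # 2.0 -- participant A is the most central speaker
```

Metrics of this kind, computed per participant over the conversation graph, are the "structural features" the study feeds to its classifiers alongside non-discourse and discourse features.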
Affiliation(s)
- Jiang Wu
- School of Economic Information Engineering, Southwestern University of Finance and Economics, China
- Sichuan Province Key Laboratory of Financial Intelligence and Financial Engineering, Southwestern University of Finance and Economics, China
- Yangyang Liu
- School of Economic Information Engineering, Southwestern University of Finance and Economics, China
38
Advancing the assessment of automated deception detection systems: Incorporating base rate and cost into system evaluation. INFORMATION SYSTEMS JOURNAL 2018. [DOI: 10.1111/isj.12231]
39
Jupe LM, Vrij A, Leal S, Nahari G. Are you for real? Exploring language use and unexpected process questions within the detection of identity deception. APPLIED COGNITIVE PSYCHOLOGY 2018. [DOI: 10.1002/acp.3446]
Affiliation(s)
- Aldert Vrij
- Department of Psychology, University of Portsmouth, Portsmouth, UK
- Sharon Leal
- Department of Psychology, University of Portsmouth, Portsmouth, UK
- Galit Nahari
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
40
Warren KL, Peterson C, Gillingham CC. Children who are coached to lie: does linguistic analysis help in understanding why these children are so believable? PSYCHIATRY, PSYCHOLOGY, AND LAW 2018; 25:789-805. [PMID: 31984053] [PMCID: PMC6818314] [DOI: 10.1080/13218719.2018.1478336]
Abstract
In this study, the usefulness of linguistic analysis in determining the veracity of children's accounts is examined. The Linguistic Inquiry and Word Count (LIWC) 2007 program was used to analyze 95 stories told by 5- to 14-year-olds who were telling the truth or a lie about the stressful experience of breaking a bone or requiring sutures for serious lacerations. Half of the children were coached by parents in preparing their story over the four days prior to giving their account. Differences emerged in the linguistic style used as a function of age, presence of coaching, and event veracity. Very few linguistic categories emerged as significant predictors of event veracity, and the variables that did emerge differed depending upon the presence of coaching. Since in real-life situations one seldom knows a child's coaching history, these findings suggest that it is inappropriate to use linguistic analysis to assess the veracity of children's accounts.
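The word-count approach the abstract describes can be illustrated in miniature. Everything below is invented for illustration: the real LIWC 2007 dictionaries are proprietary and contain thousands of entries per category, so the two tiny categories here only show the mechanics of computing a category's share of total words.

```python
# LIWC-style category word counting in miniature. The two categories and
# their word lists are invented; real LIWC dictionaries are far larger.

CATEGORIES = {
    "negemo": {"hurt", "pain", "cried", "scared"},     # negative emotion
    "cogproc": {"because", "think", "maybe", "know"},  # cognitive process
}

def category_rates(text):
    """Return each category's share of total words, as LIWC reports."""
    words = text.lower().split()
    return {cat: sum(w.strip(".,!?") in vocab for w in words) / len(words)
            for cat, vocab in CATEGORIES.items()}

rates = category_rates("I cried because the pain was so bad.")
```

Per-category rates like these are the variables the study then compared across age, coaching, and veracity conditions.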
Affiliation(s)
- Kelly L. Warren
- Psychology Program, Memorial University of Newfoundland, Corner Brook, Canada
- Carole Peterson
- Department of Psychology, Memorial University of Newfoundland, St. John's, Canada
- Cassy C. Gillingham
- Department of Psychology, Memorial University of Newfoundland, St. John's, Canada
41
Kleinberg B, van der Toolen Y, Vrij A, Arntz A, Verschuere B. Automated verbal credibility assessment of intentions: The model statement technique and predictive modeling. Applied Cognitive Psychology 2018; 32:354-366. [PMID: 29861544] [PMCID: PMC5969289] [DOI: 10.1002/acp.3407] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Received: 11/10/2017] [Revised: 02/15/2018] [Accepted: 02/20/2018] [Indexed: 02/06/2023]
Abstract
Recently, verbal credibility assessment has been extended to the detection of deceptive intentions, the use of a model statement, and predictive modeling. The current investigation combines these 3 elements to detect deceptive intentions on a large scale. Participants read a model statement and wrote a truthful or deceptive statement about their planned weekend activities (Experiment 1). With the use of linguistic features for machine learning, more than 80% of the participants were classified correctly. Exploratory analyses suggested that liars included more person and location references than truth‐tellers. Experiment 2 examined whether these findings replicated on independent‐sample data. The classification accuracies remained well above chance level but dropped to 63%. Experiment 2 corroborated the finding that liars' statements are richer in location and person references than truth‐tellers' statements. Together, these findings suggest that liars may over‐prepare their statements. Predictive modeling shows promise as an automated veracity assessment approach but needs validation on independent data.
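The feature-based classification pipeline the abstract describes can be sketched in a few lines. The word lists, statements, and midpoint-threshold rule below are all invented; the study used far richer linguistic features and proper machine-learning models, but the shape of the pipeline is the same: extract a rate per statement, fit on labelled data, classify new statements.

```python
# Miniature version of the predictive-modeling pipeline. Word lists,
# example statements, and the threshold rule are invented for illustration.

PERSON = {"i", "we", "he", "she", "they", "friend", "brother"}
LOCATION = {"home", "city", "park", "station", "amsterdam"}

def reference_rate(statement):
    """Share of words that are (crude) person or location references."""
    words = [w.strip(".,").lower() for w in statement.split()]
    return sum(w in PERSON or w in LOCATION for w in words) / len(words)

def fit_threshold(truthful, deceptive):
    """Midpoint between the two classes' mean reference rates."""
    t = sum(map(reference_rate, truthful)) / len(truthful)
    d = sum(map(reference_rate, deceptive)) / len(deceptive)
    return (t + d) / 2

def predict(statement, threshold):
    # Per the exploratory finding: liars included MORE person/location references.
    return "lie" if reference_rate(statement) > threshold else "truth"

truthful = ["We will just stay home and relax all weekend"]
deceptive = ["I will meet my friend and she and they at the park in Amsterdam"]
threshold = fit_threshold(truthful, deceptive)
```

The drop from 80% to 63% accuracy on independent data corresponds to fitting the threshold (or model weights) on one sample and evaluating it on another.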
Affiliation(s)
- Bennett Kleinberg
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Aldert Vrij
- Department of Psychology, University of Portsmouth, Portsmouth, UK
- Arnoud Arntz
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Bruno Verschuere
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
42
Masip J, Martínez C, Blandón-Gitlin I, Sánchez N, Herrero C, Ibabe I. Learning to Detect Deception from Evasive Answers and Inconsistencies across Repeated Interviews: A Study with Lay Respondents and Police Officers. Front Psychol 2018; 8:2207. [PMID: 29354078] [PMCID: PMC5758596] [DOI: 10.3389/fpsyg.2017.02207] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 09/18/2017] [Accepted: 12/04/2017] [Indexed: 11/13/2022] Open
Abstract
Previous research has shown that inconsistencies across repeated interviews do not indicate deception because liars deliberately tend to repeat the same story. However, when a strategic interview approach that makes it difficult for liars to use the repeat strategy is used, both consistency and evasive answers differ significantly between truth tellers and liars, and statistical software (binary logistic regression analyses) can reach high classification rates (Masip et al., 2016b). Yet, if the interview procedure is to be used in applied settings, the decision process will be made by humans, not statistical software. To address this issue, in the current study, 475 college students (Experiment 1) and 142 police officers (Experiment 2) were instructed to code and use consistency, evasive answers, or a combination of both before judging the veracity of Masip et al.'s (2016b) interview transcripts. Accuracy rates were high (60% to over 90%). Evasive answers yielded higher rates than consistency, and the combination of both these cues produced the highest accuracy rates in identifying both truthful and deceptive statements. Uninstructed participants performed fairly well (around 75% accuracy), apparently because they spontaneously used consistency and evasive answers. The pattern of results was the same among students, all officers, and veteran officers only, and shows that inconsistencies between interviews and evasive answers reveal deception when a strategic interview approach that hinders the repeat strategy is used.
Affiliation(s)
- Jaume Masip
- Department of Social Psychology and Anthropology, University of Salamanca, Salamanca, Spain
- Carmen Martínez
- Department of Social Psychology and Anthropology, University of Salamanca, Salamanca, Spain
- Iris Blandón-Gitlin
- Department of Psychology, California State University Fullerton, Fullerton, CA, United States
- Nuria Sánchez
- Department of Social Psychology and Anthropology, University of Salamanca, Salamanca, Spain
- Carmen Herrero
- Department of Social Psychology and Anthropology, University of Salamanca, Salamanca, Spain
- Izaskun Ibabe
- Department of Social Psychology and Methodology of the Behavioral Sciences, University of the Basque Country, San Sebastián, Spain
43
Burgoon JK. Separating the Wheat From the Chaff: Guidance From New Technologies for Detecting Deception in the Courtroom. Front Psychiatry 2018; 9:774. [PMID: 30705646] [PMCID: PMC6344437] [DOI: 10.3389/fpsyt.2018.00774] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 09/13/2018] [Accepted: 12/24/2018] [Indexed: 11/17/2022] Open
Affiliation(s)
- Judee K Burgoon
- Center for the Management of Information, University of Arizona, Tucson, AZ, United States
44
Bond GD, Holman RD, Eggert JAL, Speller LF, Garcia ON, Mejia SC, Mcinnes KW, Ceniceros EC, Rustige R. ‘Lyin' Ted’, ‘Crooked Hillary’, and ‘Deceptive Donald’: Language of Lies in the 2016 US Presidential Debates. Applied Cognitive Psychology 2017. [DOI: 10.1002/acp.3376] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Indexed: 11/06/2022]
Affiliation(s)
- Gary D. Bond
- Department of Psychology, Eastern New Mexico University, Portales, USA
- Rebecka D. Holman
- Department of Psychology, Eastern New Mexico University, Portales, USA
- Olivia N. Garcia
- Department of Psychology, Eastern New Mexico University, Portales, USA
- Sasha C. Mejia
- Department of Psychology, Eastern New Mexico University, Portales, USA
- Kohlby W. Mcinnes
- Department of Psychology, Eastern New Mexico University, Portales, USA
- Rebecca Rustige
- Department of Psychology, Eastern New Mexico University, Portales, USA
45
Kleinberg B, Mozes M, Arntz A, Verschuere B. Using Named Entities for Computer-Automated Verbal Deception Detection. J Forensic Sci 2017; 63:714-723. [PMID: 28940300] [DOI: 10.1111/1556-4029.13645] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 04/27/2017] [Revised: 08/24/2017] [Accepted: 08/28/2017] [Indexed: 12/19/2022]
Abstract
There is an increasing demand for automated verbal deception detection systems. We propose named entity recognition (NER; i.e., the automatic identification and extraction of information from text) to model three established theoretical principles: (i) truth tellers provide accounts that are richer in detail, (ii) contain more contextual references (specific persons, locations, and times), and (iii) deceivers tend to withhold potentially checkable information. We test whether NER captures these theoretical concepts and can automatically identify truthful versus deceptive hotel reviews. We extracted the proportion of named entities with two NER tools (spaCy and Stanford's NER) and compared the discriminative ability to a lexicon word count approach (LIWC) and a measure of sentence specificity (speciteller). Named entities discriminated truthful from deceptive hotel reviews above chance level, and outperformed the lexicon approach and sentence specificity. This investigation suggests that named entities may be a useful addition to existing automated verbal deception detection approaches.
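The paper's core metric, the proportion of named-entity tokens in a text, can be sketched as follows. A crude capitalisation heuristic stands in here for a real NER tool such as spaCy or Stanford NER; the heuristic is an assumption for illustration only, not what the authors used.

```python
# Proportion of named-entity tokens in a text, with a capitalisation
# heuristic standing in for a real NER tool (spaCy, Stanford NER).

def entity_proportion(text):
    """Share of tokens flagged as (approximate) named entities."""
    tokens = text.split()
    # Mid-sentence capitalised tokens approximate names, places, and months.
    entities = [t for i, t in enumerate(tokens) if i > 0 and t[0].isupper()]
    return len(entities) / len(tokens)

prop = entity_proportion("We stayed near Hyde Park in London last May")
```

In the study, a higher proportion of such contextual references (persons, locations, times) signalled a truthful review.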
Affiliation(s)
- Bennett Kleinberg
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129 D, 1018 WS, Amsterdam, The Netherlands
- Maximilian Mozes
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129 D, 1018 WS, Amsterdam, The Netherlands
- Department of Informatics, Technical University of Munich, Boltzmannstr. 3, Garching near Munich, Germany
- Arnoud Arntz
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129 D, 1018 WS, Amsterdam, The Netherlands
- Bruno Verschuere
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129 D, 1018 WS, Amsterdam, The Netherlands
46
Heering N, Volbert R. The Individual Depictive Style: Individual Differences in Narrating Personal Experiences. Applied Cognitive Psychology 2017. [DOI: 10.1002/acp.3319] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Indexed: 11/11/2022]
Affiliation(s)
- Nina Heering
- Charité-Universitaetsmedizin Berlin, Berlin, Germany
- Renate Volbert
- Charité-Universitaetsmedizin Berlin, Berlin, Germany
- Psychologische Hochschule Berlin, Berlin, Germany
47
Kleinberg B, Nahari G, Arntz A, Verschuere B. An Investigation on the Detectability of Deceptive Intent about Flying through Verbal Deception Detection. Collabra: Psychology 2017. [DOI: 10.1525/collabra.80] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Indexed: 11/17/2022] Open
Abstract
Background:
Academic research on deception detection has largely focused on the detection of past events. For many applied purposes, however, the detection of false reports about someone’s intention merits attention. Based on the verbal deception detection paradigm, we explored whether true statements on intentions were more detailed and more specific than false statements on intentions, particularly when instructed to be as specific as possible.
Method:
Participants (n = 222) lied or told the truth about their upcoming travel plans either providing ‘as much information as possible’ (standard instructions) or being ‘as specific as possible’ (i.e., mentioning times, locations, places; specific instructions), resulting in four conditions (truthful vs. deceptive intention by standard vs. specific instructions). We collected data via a custom-made web app and performed automated verbal content analysis of participants’ written answers.
Findings:
We did not find a significant difference in specificity between true and false statements of intent. The instruction to be as specific as possible promoted more specific information but did not help to discern honest from deceptive flying intentions.
Conclusion:
The experiment reported here attempted to demonstrate automated verbal deception detection of intentions. The difficulty in capturing genuine intentions, and the non-intrusive, non-interactive questioning approach might explain the null findings and raise questions for further research. We conclude with suggestions for a novel framework on semi-interactive information elicitation.
48
Masip J, Blandón-Gitlin I, Martínez C, Herrero C, Ibabe I. Strategic Interviewing to Detect Deception: Cues to Deception across Repeated Interviews. Front Psychol 2016; 7:1702. [PMID: 27847493] [PMCID: PMC5088571] [DOI: 10.3389/fpsyg.2016.01702] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Received: 05/16/2016] [Accepted: 10/17/2016] [Indexed: 11/13/2022] Open
Abstract
Previous deception research on repeated interviews found that liars are not less consistent than truth tellers, presumably because liars use a “repeat strategy” to be consistent across interviews. The goal of this study was to design an interview procedure to overcome this strategy. Innocent participants (truth tellers) and guilty participants (liars) had to convince an interviewer that they had performed several innocent activities rather than committing a mock crime. The interview focused on the innocent activities (alibi), contained specific central and peripheral questions, and was repeated after 1 week without forewarning. Cognitive load was increased by asking participants to reply quickly. The liars’ answers in replying to both central and peripheral questions were significantly less accurate, less consistent, and more evasive than the truth tellers’ answers. Logistic regression analyses yielded classification rates of around 70% with consistency as the predictor variable, 85% with evasive answers as the predictor variable, and over 90% both with an improved measure of consistency that incorporated evasive answers and with response accuracy as the predictor variable. These classification rates were higher than the interviewers’ accuracy rate (54%).
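The logistic-regression step can be sketched in plain Python. The four "interviews" below are invented; the study fitted models to real consistency and evasive-answer measures, but the mechanics are standard: fit weights by gradient descent on the log-loss, then threshold the predicted probability at .5.

```python
import math

# Toy re-creation of the logistic-regression analysis: consistency score
# and evasive-answer count as predictors of lying. Data are invented.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the log-loss; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def is_liar(x, w, b):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Features: (consistency score in [0, 1], number of evasive answers); y = 1 for liars.
X = [(0.9, 0), (0.8, 1), (0.4, 4), (0.3, 5)]
y = [0, 0, 1, 1]
w, b = fit_logistic(X, y)
```

The study's improved classifier corresponds to enriching the predictor set, e.g. folding evasive answers into the consistency measure, rather than changing the model itself.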
Affiliation(s)
- Jaume Masip
- Department of Social Psychology and Anthropology, University of Salamanca, Salamanca, Spain
- Iris Blandón-Gitlin
- Department of Psychology, California State University Fullerton, Fullerton, CA, USA
- Carmen Martínez
- Department of Social Psychology and Anthropology, University of Salamanca, Salamanca, Spain
- Carmen Herrero
- Department of Social Psychology and Anthropology, University of Salamanca, Salamanca, Spain
- Izaskun Ibabe
- Department of Social Psychology and Methodology of the Behavioral Sciences, University of the Basque Country, San Sebastián, Spain
49
Swanner JK, Meissner CA, Atkinson DJ, Dianiska RE. Developing diagnostic, evidence-based approaches to interrogation. Journal of Applied Research in Memory and Cognition 2016. [DOI: 10.1016/j.jarmac.2016.07.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Indexed: 11/29/2022]
50
Masip J, Blandón-Gitlin I, de la Riva C, Herrero C. An empirical test of the decision to lie component of the Activation-Decision-Construction-Action Theory (ADCAT). Acta Psychol (Amst) 2016; 169:45-55. [PMID: 27219533] [DOI: 10.1016/j.actpsy.2016.05.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Received: 01/16/2016] [Revised: 05/06/2016] [Accepted: 05/09/2016] [Indexed: 10/21/2022] Open
Abstract
Meta-analyses reveal that behavioral differences between liars and truth tellers are small. To facilitate lie detection, researchers are currently developing interviewing approaches to increase these differences. Some of these approaches assume that lying is cognitively more difficult than truth telling; however, they are not based on specific cognitive theories of lie production, which are rare. Here we examined one existing theory, Walczyk et al.'s (2014) Activation-Decision-Construction-Action Theory (ADCAT). We tested the Decision component. According to ADCAT, people decide whether to lie or tell the truth as if they were using a specific mathematical formula to calculate the motivation to lie from (a) the probability of a number of outcomes derived from lying vs. telling the truth, and (b) the costs/benefits associated with each outcome. In this study, participants read several hypothetical scenarios and indicated whether they would lie or tell the truth in each scenario (Questionnaire 1). Next, they answered several questions about the consequences of lying vs. telling the truth in each scenario, and rated the probability and valence of each consequence (Questionnaire 2). Significant associations were found between the participants' dichotomous decision to lie/tell the truth in Questionnaire 1 and their motivation to lie scores calculated from the Questionnaire 2 data. However, interestingly, whereas the expected consequences of truth telling were associated with the decision to lie vs. tell the truth, the expected consequences of lying were not. Suggestions are made to refine ADCAT, which can be a useful theoretical framework to guide deception research.
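The decision calculus the abstract summarizes is an expected-value comparison, which a few lines make concrete. All probabilities and valences below are hypothetical, not Walczyk et al.'s published parameters; the point is only that motivation to lie is computed from outcome probabilities weighted by their costs/benefits.

```python
# Illustrative sketch of ADCAT's Decision component: motivation to lie as
# the difference in expected value between lying and truth telling.
# Numbers are hypothetical, chosen only to show the mechanics.

def expected_value(outcomes):
    """Sum of probability * subjective cost/benefit over possible outcomes."""
    return sum(p * v for p, v in outcomes)

lie_outcomes = [(0.7, 4.0),     # lie is believed: sizeable benefit
                (0.3, -8.0)]    # lie is detected: large cost
truth_outcomes = [(1.0, -2.0)]  # truthful admission: certain moderate cost

motivation_to_lie = expected_value(lie_outcomes) - expected_value(truth_outcomes)
# A positive value favours the decision to lie.
```

The study's finding maps onto this sketch as follows: participants' choices tracked the truth-telling term of the comparison but, surprisingly, not the lying term.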