1. Lee S, Jung D. How does malingered PTSD affect continuous performance task performance? Appl Neuropsychol Adult. 2024;31:1216-1224. PMID: 36027606. DOI: 10.1080/23279095.2022.2115370.
Abstract
The purpose of this study was to determine how malingered PTSD behavior affects performance on a continuous performance task (CPT). An analog trauma group, two malingering groups (with or without educational intervention), and a control group were organized according to a simulation design. During the CPT, the numbers of errors and response time indicators, along with the post-error slowing (PES) and post-error recovery (PER) processes, were measured. Results are as follows: First, the analog trauma group showed deficits in response inhibition and a higher level of PES compared to the control group. Second, malingered PTSD caused a significant number of errors, inconsistent performance, and no PES. Third, performance was significantly more impaired and inconsistent in the group with a low level of knowledge of the disability. Finally, discriminant accuracy of more than 90% was obtained in the discriminant analysis across all group comparison conditions. Taken together, the results of this study show that post-error behavior indicators are affected by malingered PTSD, and that differences according to the degree of knowledge of PTSD can also be confirmed. These results are expected to serve as basic data for the development of tasks for detecting malingerers in clinical settings in the future.
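The post-error slowing contrast described in this abstract can be sketched in a few lines: PES is commonly computed as the mean reaction time on trials following an error minus the mean on trials following a correct response. This is a minimal illustrative sketch, not the authors' actual scoring procedure, and the trial data below are invented.

```python
# Sketch: computing post-error slowing (PES) from a continuous performance
# task log. Hypothetical data layout: one reaction time (ms) and one
# correctness flag per trial. Not the authors' code.

def post_error_slowing(rts, correct):
    """PES = mean RT on trials following an error
    minus mean RT on trials following a correct response."""
    post_error = [rts[i] for i in range(1, len(rts)) if not correct[i - 1]]
    post_correct = [rts[i] for i in range(1, len(rts)) if correct[i - 1]]
    if not post_error or not post_correct:
        return None  # not enough trials to estimate PES
    return (sum(post_error) / len(post_error)
            - sum(post_correct) / len(post_correct))

# Toy example: RT rises right after the error on trial 3.
rts = [420, 430, 410, 520, 510, 425]
correct = [True, True, False, True, True, True]
print(post_error_slowing(rts, correct))
```

A positive value indicates slowing after errors; the abstract's finding of "no PES" in the malingering groups would correspond to a value near or below zero.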
Affiliation(s)
- Sangil Lee
- Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Dooyoung Jung
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Healthcare Center, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
2. Finley JCA. Performance validity testing: the need for digital technology and where to go from here. Front Psychol. 2024;15:1452462. PMID: 39193033. PMCID: PMC11347285. DOI: 10.3389/fpsyg.2024.1452462.
Affiliation(s)
- John-Christopher A. Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, United States
3. Orrù G, De Marchi B, Sartori G, Gemignani A, Scarpazza C, Monaro M, Mazza C, Roma P. Machine learning item selection for short scale construction: A proof-of-concept using the SIMS. Clin Neuropsychol. 2023;37:1371-1388. PMID: 36017966. DOI: 10.1080/13854046.2022.2114548.
Abstract
Objective: This proof-of-concept paper provides evidence to support machine learning (ML) as a valid alternative to traditional psychometric techniques in the development of short forms of longer parent psychological tests. ML comprises a variety of feature selection techniques that can be efficiently applied to identify the set of items that best replicates the characteristics of the original test. Methods: In the present study, we integrated a dataset of 329 participants from published and unpublished datasets used in previous research on the Structured Inventory of Malingered Symptomatology (SIMS) to develop a short version of the scale. The SIMS is a multi-axial self-report questionnaire and a highly efficient psychometric measure of symptom validity, which is frequently applied in forensic settings. Results: State-of-the-art ML item selection techniques achieved a 72% reduction in length while capturing 92% of the variance of the original SIMS. The new SIMS short form consists of 21 items. Conclusions: The results suggest that the proposed ML-based item selection technique represents a promising alternative to standard psychometric correlation-based methods (i.e., item selection, item response theory), especially when selection techniques (e.g., wrapper) are employed that evaluate global, rather than local, item value.
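A much-simplified sketch of the wrapper-style item selection idea: greedily add the item whose inclusion most improves the correlation between the short-form total and the full-scale total. The `pearson` helper, the greedy criterion, and the toy item data are all illustrative assumptions; the paper used more sophisticated ML wrappers on real SIMS data.

```python
# Greedy forward ("wrapper"-style) item selection for a short scale.
# Illustrative sketch only; item scores below are made up.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def forward_select(items, total, k):
    """items: dict item_name -> per-subject scores; total: full-scale score.
    Returns the k items whose summed score best tracks the total."""
    chosen = []
    while len(chosen) < k:
        def short_score(cand):
            names = chosen + [cand]
            return [sum(items[n][i] for n in names) for i in range(len(total))]
        best = max((c for c in items if c not in chosen),
                   key=lambda c: pearson(short_score(c), total))
        chosen.append(best)
    return chosen

items = {
    "i1": [0, 1, 2, 3],
    "i2": [3, 2, 1, 0],
    "i3": [1, 1, 0, 0],
}
total = [4, 4, 3, 3]
print(forward_select(items, total, 2))
```

Because each candidate is evaluated by the performance of the whole selected set, this is a "global" evaluation in the abstract's sense, unlike ranking items one by one.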
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Barbara De Marchi
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
- Cristina Mazza
- Department of Neuroscience, Imaging and Clinical Sciences, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
- Paolo Roma
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
4. Orrù G, Ordali E, Monaro M, Scarpazza C, Conversano C, Pietrini P, Gemignani A, Sartori G. Reconstructing individual responses to direct questions: a new method for reconstructing malingered responses. Front Psychol. 2023;14:1093854. PMID: 37397336. PMCID: PMC10311065. DOI: 10.3389/fpsyg.2023.1093854.
Abstract
Introduction: The false consensus effect consists of an overestimation of how common a subject's opinion is among other people. This research demonstrates that an individual's endorsement of questions may be predicted by their estimates of peers' responses to the same questions. Moreover, we aim to demonstrate how this prediction can be used to reconstruct the individual's response to a single item as well as the overall response to all of the items, making the technique suitable and effective for malingering detection. Method: We validated the procedure of reconstructing individual responses from peers' estimation in two separate studies, one addressing anxiety-related questions and the other addressing the Dark Triad. The questionnaires, adapted to our purposes, were administered to groups of participants, for a total of 187 subjects across both studies. Machine learning models were used to estimate the results. Results: According to the results, individual responses to a single question requiring a "yes" or "no" response are predicted with 70-80% accuracy. The overall predicted score on all questions (total test score) correlates 0.7-0.77 with actual results. Discussion: The application of the false consensus effect format is a promising procedure for reconstructing truthful responses in forensic settings when the respondent is highly likely to alter their true (genuine) response and true responses to the tests are missing.
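The core inference this abstract describes can be reduced to a bare stand-in: under the false consensus effect, a respondent who believes most peers would endorse an item is likely to endorse it themselves. The paper used machine learning models; a simple majority rule is shown here, and all numbers are invented.

```python
# Minimal stand-in for reconstructing a respondent's own "yes"/"no"
# from their estimate of peer endorsement (false consensus effect).
# The 50% threshold and the estimates are illustrative assumptions.

def predict_own_answer(peer_estimate_pct, threshold=50):
    """Predict 'yes' when the respondent believes most peers endorse the item."""
    return "yes" if peer_estimate_pct > threshold else "no"

estimates = [80, 35, 60, 10]  # "% of peers who would say yes", one per item
print([predict_own_answer(e) for e in estimates])
```

An ML model, as in the study, would learn the mapping from peer estimates to responses rather than fix the threshold at 50%.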
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
- Ciro Conversano
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
5. Orrù G, Piarulli A, Conversano C, Gemignani A. Human-like problem-solving abilities in large language models using ChatGPT. Front Artif Intell. 2023;6:1199350. PMID: 37293238. PMCID: PMC10244637. DOI: 10.3389/frai.2023.1199350.
Abstract
Background: The field of Artificial Intelligence (AI) has seen a major shift in recent years due to the development of new Machine Learning (ML) models such as the Generative Pre-trained Transformer (GPT). GPT has achieved previously unheard-of levels of accuracy in most computerized language processing tasks and their chat-based variations. Aim: The aim of this study was to investigate the problem-solving abilities of ChatGPT using two sets of verbal insight problems, with a known performance level established by a sample of human participants. Materials and methods: A total of 30 problems labeled as "practice problems" and "transfer problems" were administered to ChatGPT. ChatGPT's answers received a score of "0" for each incorrectly answered problem and a score of "1" for each correct response. The highest possible score for both the practice and transfer problems was 15 out of 15. The solution rate for each problem (based on a sample of 20 subjects) was used to assess and compare the performance of ChatGPT with that of human subjects. Results: The study highlighted that ChatGPT can engage in out-of-the-box thinking and demonstrated potential in solving verbal insight problems. The global performance of ChatGPT equalled the most probable outcome for the human sample on both the practice problems and the transfer problems, as well as on their combination. Additionally, ChatGPT's answer combinations were among the 5% most probable outcomes for the human sample both when considering the practice problems and the pooled problem sets. These findings demonstrate that ChatGPT's performance on both sets of problems was in line with the mean rate of success of human subjects, indicating that it performed reasonably well. Conclusions: The use of transformer architecture and self-attention in ChatGPT may have helped to prioritize inputs while predicting, contributing to its potential in verbal insight problem-solving. ChatGPT has shown potential in solving insight problems, thus highlighting the importance of incorporating AI into psychological research. However, it is acknowledged that there are still open challenges. Indeed, further research is required to fully understand AI's capabilities and limitations in verbal problem-solving.
6. Giromini L, Pasqualini S, Corgiat Loia A, Pignolo C, Di Girolamo M, Zennaro A. A Survey of Practices and Beliefs of Italian Psychologists Regarding Malingering and Symptom Validity Assessment. Psychol Inj Law. 2022. DOI: 10.1007/s12207-022-09452-2.
Abstract
A few years ago, an article describing the current status of Symptom Validity Assessment (SVA) practices and beliefs in European countries reported that there was little research activity in Italy (Merten et al., 2013). The same article also highlighted that Italian practitioners were less inclined to use Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in their assessments, compared with their colleagues from other major European countries. Considering that several articles on malingering and SVA have been published by Italian authors in recent years, we concluded that an update on the practices and beliefs of Italian professionals regarding malingering and SVA would be beneficial. Accordingly, from a larger survey that examined the general psychological assessment practices and beliefs of Italian professionals, we extracted a subset of items specifically related to malingering and SVA and analyzed the responses of a sample of Italian psychologists who have some experience with malingering-related assessments. Taken together, the results of our analyses indicated that even though our respondents tend to use SVTs and PVTs relatively often in their evaluations, at this time they likely place more trust in their own personal observations, impressions, and overall clinical judgment in their SVA practice. Additionally, our results indicated that Italian practitioners with some familiarity with malingering-related evaluations consider malingering to occur in about one-third of psychological assessments in which the evaluee might have an interest in overreporting.
7. Ferrucci R, Mameli F, Ruggiero F, Reitano M, Miccoli M, Gemignani A, Conversano C, Dini M, Zago S, Piacentini S, Poletti B, Priori A, Orrù G. Alternate fluency in Parkinson's disease: A machine learning analysis. PLoS One. 2022;17:e0265803. PMID: 35320291. PMCID: PMC8942276. DOI: 10.1371/journal.pone.0265803.
Abstract
Objective
The aim of the present study was to investigate whether patients with Parkinson’s Disease (PD) had changes in their level of performance in extra-dimensional shifting by implementing a novel analysis method, utilizing the new alternate phonemic/semantic fluency test.
Method
We used machine learning (ML) in order to develop high accuracy classification between PD patients with high and low scores in the alternate fluency test.
Results
The models developed proved accurate in this classification, with accuracies ranging between 80% and 90%. The predictor that demonstrated maximum efficiency in classifying participants as low or high performers was the semantic fluency test. The optimal cut-off of a decision rule based on this test yielded an accuracy of 86.96%. Following the removal of the semantic fluency test from the system, the parameter that best contributed to the classification was the phonemic fluency test. The best cut-offs were identified, and the decision rule yielded an overall accuracy of 80.43%. Lastly, to evaluate the classification accuracy based on the shifting index, the best cut-offs based on an optimal single rule yielded an overall accuracy of 83.69%.
Conclusion
We found that ML analysis of semantic and phonemic verbal fluency may be used to identify simple rules with high accuracy and good out-of-sample generalization, allowing the detection of executive deficits in patients with PD.
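The "optimal cut-off of a decision rule" idea used throughout these results can be sketched directly: scan every candidate threshold on a single test score and keep the one that classifies the most cases correctly. This is an illustrative sketch under invented scores and labels, not the study's actual data or pipeline.

```python
# Sketch: finding the optimal single cut-off for a one-feature decision
# rule (score >= cutoff -> high performer). Data below are fabricated.

def best_cutoff(scores, labels):
    """labels: 1 = high performer, 0 = low performer.
    Returns (cutoff, accuracy) for the best rule of the form score >= cutoff."""
    best = (None, 0.0)
    for c in sorted(set(scores)):
        preds = [1 if s >= c else 0 for s in scores]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best[1]:
            best = (c, acc)
    return best

scores = [12, 15, 9, 22, 25, 19, 8, 21]   # e.g. semantic fluency scores
labels = [0, 0, 0, 1, 1, 1, 0, 1]
print(best_cutoff(scores, labels))
```

In practice the cut-off would be chosen on training data and its accuracy reported on held-out data, to avoid the over-optimism that in-sample tuning produces.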
Affiliation(s)
- Roberta Ferrucci
- Department of Health Sciences, Aldo Ravelli Research Center, University of Milan, Milan, Italy
- ASST-Santi Paolo e Carlo Hospital, Milan, Italy
- IRCCS Ca’ Granda Foundation, Policlinico of Milan, Milan, Italy
- Mario Miccoli
- Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Ciro Conversano
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Michelangelo Dini
- Department of Health Sciences, Aldo Ravelli Research Center, University of Milan, Milan, Italy
- Stefano Zago
- IRCCS Ca' Granda Foundation, Policlinico of Milan, Milan, Italy
- Alberto Priori
- Department of Health Sciences, Aldo Ravelli Research Center, University of Milan, Milan, Italy
- ASST-Santi Paolo e Carlo Hospital, Milan, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
8. Detecting Cognitive Impairment Status Using Keystroke Patterns and Physical Activity Data among the Older Adults: A Machine Learning Approach. J Healthc Eng. 2021;2021:1302989. PMID: 34966518. PMCID: PMC8712156. DOI: 10.1155/2021/1302989.
Abstract
Cognitive impairment has a significant negative impact on global healthcare and the community. Preserving a person's cognition and mental retention in old age becomes increasingly improbable with aging. Early detection of cognitive impairment can reduce the risk that prolonged disease progresses to permanent mental damage. This paper aims to develop a machine learning model to detect and differentiate cognitive impairment categories, i.e., severe, moderate, mild, and normal, by analyzing neurophysical and physical data. Keystroke data and a smartwatch were used to extract individuals' neurophysical and physical data, respectively. An advanced ensemble learning algorithm named Gradient Boosting Machine (GBM) is proposed to classify the cognitive severity level (absence, mild, moderate, and severe) based on Standardised Mini-Mental State Examination (SMMSE) questionnaire scores. The statistical method Pearson's correlation and the wrapper feature selection technique were used to analyze and select the best features. We then applied the proposed GBM algorithm to those features, and the result showed an accuracy of more than 94%. This paper adds a new dimension to the state of the art in predicting cognitive impairment by combining neurophysical and physical data.
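The Pearson-correlation screening step named in this abstract can be sketched as ranking candidate features by the absolute correlation of each with the SMMSE score. The feature names and values below are invented for illustration; the paper's actual pipeline additionally applied wrapper selection and a GBM classifier.

```python
# Sketch: ranking candidate keystroke/activity features by |Pearson r|
# against the SMMSE score. All data are hypothetical.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

features = {
    "keystroke_latency": [210, 250, 300, 360],   # ms, rises with impairment
    "daily_steps":       [9000, 7000, 5200, 3100],
    "typos_per_minute":  [1.0, 1.2, 0.9, 1.1],   # essentially uninformative
}
smmse = [29, 26, 22, 17]  # one SMMSE score per subject

ranked = sorted(features, key=lambda f: -abs(pearson(features[f], smmse)))
print(ranked)
```

Only the top-ranked features would then be passed to the wrapper stage and the downstream classifier.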
9. Symptom and Performance Validity Assessment in European Countries: an Update. Psychol Inj Law. 2021;15:116-127. PMID: 34849185. PMCID: PMC8612718. DOI: 10.1007/s12207-021-09436-8.
Abstract
In 2013, a special issue of the Spanish journal Clínica y Salud published a review on symptom and performance validity assessment in European countries (Merten et al. in Clínica y Salud, 24(3), 129–138, 2013). At that time, developments were judged to be in their infancy in many countries, with major publication activities stemming from only four countries: Spain, The Netherlands, Great Britain, and Germany. As an introduction to a special issue of Psychological Injury and Law, this is an updated report of developments during the last 10 years. In that period of time, research activities have reached a level where it is difficult to follow all developments; some validity measures were newly developed, others were adapted for European languages, and validity assessment has found a much stronger place in real-world evaluation contexts. Next to an update from the four nations mentioned above, reports are now given from Austria, Italy, and Switzerland, too.
10. Facial Emotion Recognition Predicts Alexithymia Using Machine Learning. Comput Intell Neurosci. 2021;2021:2053795. PMID: 34621306. PMCID: PMC8492233. DOI: 10.1155/2021/2053795.
Abstract
Objective: Alexithymia, a fundamental notion in the diagnosis of psychiatric disorders, is characterized by deficits in emotional processing and, consequently, difficulties in emotion recognition. Traditional tools for assessing alexithymia, which include interviews and self-report measures, have led to inconsistent results due to limitations such as insufficient insight. Therefore, the purpose of the present study was to propose a new screening tool that utilizes machine learning models based on the scores of a facial emotion recognition task. Method: In a cross-sectional study, 55 students of the University of Tabriz were selected based on the inclusion and exclusion criteria and their scores on the Toronto Alexithymia Scale (TAS-20). They then completed the somatization subscale of the Symptom Checklist-90 Revised (SCL-90-R), the Beck Anxiety Inventory (BAI), the Beck Depression Inventory-II (BDI-II), and the facial emotion recognition (FER) task. Afterwards, support vector machine (SVM) and feedforward neural network (FNN) classifiers were implemented using K-fold cross-validation to predict alexithymia, and model performance was assessed with the area under the curve (AUC), accuracy, sensitivity, specificity, and F1-measure. Results: The models yielded an accuracy range of 72.7-81.8% after feature selection and optimization. Our results suggested that the ML models were able to accurately distinguish alexithymia and determine the most informative items for predicting it. Conclusion: Our results show that machine learning models using the FER task, SCL-90-R, BDI-II, and BAI could successfully diagnose alexithymia, can identify the most influential predictors of it, and can be used as a clinical instrument to help clinicians diagnose and detect the disorder earlier.
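The K-fold cross-validation scheme mentioned in this abstract follows a standard recipe: hold out each fold in turn, fit on the remaining folds, and average the fold accuracies. The sketch below uses a trivial majority-class "model" as a stand-in for the SVM/FNN classifiers, and all data are invented.

```python
# Sketch: K-fold cross-validation with pluggable fit/predict functions.
# A majority-class baseline stands in for the study's SVM/FNN models.

def kfold_accuracy(X, y, k, fit, predict):
    n = len(y)
    folds = [list(range(i, n, k)) for i in range(k)]  # interleaved folds
    accs = []
    for held in folds:
        train = [i for i in range(n) if i not in held]
        model = fit([X[i] for i in train], [y[i] for i in train])
        preds = [predict(model, X[i]) for i in held]
        accs.append(sum(p == y[i] for p, i in zip(preds, held)) / len(held))
    return sum(accs) / k

# Majority-class baseline: "fit" memorizes the most common training label.
fit = lambda X, y: max(set(y), key=y.count)
predict = lambda model, x: model

X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 0, 0, 1]
print(kfold_accuracy(X, y, 3, fit, predict))
```

Swapping in a real classifier only requires replacing the `fit` and `predict` callables; the held-out evaluation loop stays the same.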
11. Scarpazza C, Miolla A, Zampieri I, Melis G, Sartori G, Ferracuti S, Pietrini P. Translational Application of a Neuro-Scientific Multi-Modal Approach Into Forensic Psychiatric Evaluation: Why and How? Front Psychiatry. 2021;12:597918. PMID: 33613339. PMCID: PMC7892615. DOI: 10.3389/fpsyt.2021.597918.
Abstract
A prominent body of literature indicates that insanity evaluations, which are intended to provide influential expert reports for judges to reach a decision "beyond any reasonable doubt," suffer from low inter-rater reliability. This paper reviews the limitations of the classical approach to insanity evaluation and the criticisms raised against the introduction of a neuro-scientific approach in court. Here, we explain why, in our opinion, these criticisms, which seriously hamper the translational implementation of neuroscience in the forensic setting, do not survive scientific scrutiny. Moreover, we discuss how the neuro-scientific multimodal approach may improve inter-rater reliability in insanity evaluation. Critically, neuroscience does not aim to introduce a brain-based concept of insanity. Indeed, criteria for responsibility and insanity are, and should remain, clinical. Rather, following the falsificationist approach and the convergence-of-evidence principle, the neuro-scientific multimodal approach is proposed as a way to improve the reliability of insanity evaluation and to mitigate the influence of cognitive biases on the formulation of insanity opinions, with the final aim of reducing errors and controversies.
Affiliation(s)
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- Department of Psychosis Studies, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom
- Alessio Miolla
- Department of General Psychology, University of Padova, Padova, Italy
- Ilaria Zampieri
- Molecular Mind Laboratory, IMT School for Advanced Studies Lucca, Lucca, Italy
- Giulia Melis
- Department of General Psychology, University of Padova, Padova, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Stefano Ferracuti
- Department of Human Neurosciences, "Sapienza" University of Rome, Rome, Italy
- Pietro Pietrini
- Molecular Mind Laboratory, IMT School for Advanced Studies Lucca, Lucca, Italy
12. Biondi S, Mazza C, Orrù G, Monaro M, Ferracuti S, Ricci E, Di Domenico A, Roma P. Interrogative suggestibility in the elderly. PLoS One. 2020;15:e0241353. PMID: 33196666. PMCID: PMC7668574. DOI: 10.1371/journal.pone.0241353.
Abstract
Interrogative suggestibility (IS) describes the extent to which an individual's behavioral response is affected by messages communicated during formal questioning within a closed social interaction. The present study aimed at improving knowledge about IS in the elderly (aged 65 years and older), in particular its association with both emotive/affective and cognitive variables. The sample (N = 172) was divided into three groups on the basis of age: late adult (aged 55-64, N = 59), young elderly (aged 65-74, N = 63), and elderly (aged 75 and older, N = 50). Cognitive (i.e., Kaufman Brief Intelligence Test-2, Rey Auditory Verbal Learning Test), emotive/affective (i.e., Rosenberg Self-Esteem Scale, Marlowe-Crowne Social Desirability Scale, Penn State Worry Questionnaire) and suggestibility measures (i.e., Gudjonsson Suggestibility Scale-2) were administered. To identify differences and associations between groups in IS and in cognitive and emotive/affective variables, ANOVA tests and Pearson's correlations were run. Furthermore, moderation analyses and hierarchical regression were conducted to determine whether age and cognitive and emotive/affective variables predicted IS components (i.e., Yield and Shift). Finally, machine learning models were developed to highlight the best strategy for classifying elderly subjects with high suggestibility. The results corroborated the significant link between IS and age, showing that elderly participants had the worst performance on all suggestibility indexes. Age was also the most important predictor of both Yield and Shift. Results also confirmed the important role of non-verbal intelligence and memory impairment in explaining IS dimensions, showing that these associations were stronger in the young elderly and elderly groups. Implications for interrogative procedures with older adults are discussed.
Affiliation(s)
- Silvia Biondi
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Cristina Mazza
- Department of Neuroscience, Imaging and Clinical Sciences, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of General Psychology, University of Padova, Padova, Italy
- Stefano Ferracuti
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Eleonora Ricci
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Alberto Di Domenico
- Department of Psychological, Health, and Territorial Sciences, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
- Paolo Roma
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
13. Orrù G, Mazza C, Monaro M, Ferracuti S, Sartori G, Roma P. The Development of a Short Version of the SIMS Using Machine Learning to Detect Feigning in Forensic Assessment. Psychol Inj Law. 2020. DOI: 10.1007/s12207-020-09389-4.
Abstract
In the present study, we applied machine learning techniques to evaluate whether the Structured Inventory of Malingered Symptomatology (SIMS) can be reduced in length yet maintain accurate discrimination between consistent participants (i.e., presumed truth tellers) and symptom producers. We applied machine learning item selection techniques to data from Mazza et al. (2019c) to identify the minimum number of original SIMS items that could accurately distinguish between consistent participants, symptom accentuators, and symptom producers in real personal injury cases. Subjects were personal injury claimants who had undergone forensic assessment, which is known to incentivize malingering and symptom accentuation. Item selection yielded short versions of the scale with as few as 8 items (to differentiate between consistent participants and symptom producers) and as many as 10 items (to differentiate between consistent and inconsistent participants). The scales had higher classification accuracy than the original SIMS and did not show the bias between false positives and false negatives that was originally reported.
14. Orrù G, Gemignani A, Ciacchini R, Bazzichi L, Conversano C. Machine Learning Increases Diagnosticity in Psychometric Evaluation of Alexithymia in Fibromyalgia. Front Med (Lausanne). 2020;6:319. PMID: 31998737. PMCID: PMC6970411. DOI: 10.3389/fmed.2019.00319.
Abstract
Here, we report an investigation of the accuracy of the Toronto Alexithymia Scale, a measure for assessing alexithymia, a multidimensional construct often associated with fibromyalgia. Two groups of participants, patients with fibromyalgia (n = 38) and healthy controls (n = 38), were administered the Toronto Alexithymia Scale and background tests. Machine learning models achieved an overall accuracy higher than 80% in detecting both patients with fibromyalgia and healthy controls. The parameter that alone demonstrated maximum efficiency in classifying single subjects into the two groups was item 3 of the alexithymia scale. The analysis of the most informative features, based on all scales administered, revealed that items 3 and 13 of the alexithymia questionnaire and the visual analog scale scores were the most informative attributes in correctly classifying participants (accuracy above 85%). An additional analysis using only the alexithymia scale subset of items and the visual analog scale scores showed that the predictors that efficiently classified patients with fibromyalgia and controls were items 3 and 7 (accuracy = 85.53%). Our findings suggest that machine learning analyses based on Toronto Alexithymia Scale subsets of item scores accurately distinguish patients with fibromyalgia from healthy controls.
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Rebecca Ciacchini
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Laura Bazzichi
- Rheumatology Unit, Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Ciro Conversano
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
15
Orrù G, Monaro M, Conversano C, Gemignani A, Sartori G. Machine Learning in Psychometrics and Psychological Research. Front Psychol 2020; 10:2970. [PMID: 31998200 PMCID: PMC6966768 DOI: 10.3389/fpsyg.2019.02970] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Accepted: 12/16/2019] [Indexed: 11/28/2022] Open
Abstract
Recent controversies about the level of replicability of behavioral research analyzed using statistical inference have spurred interest in developing more efficient techniques for analyzing the results of psychological experiments. Here we claim that complementing the analytical workflow of psychological experiments with machine learning-based analysis will both maximize accuracy and minimize replicability issues. Compared with statistical inference, ML analysis of experimental data is model agnostic and primarily focused on prediction rather than inference. We also highlight some potential pitfalls of adopting machine learning-based experiment analysis: if not used properly, it can lead to over-optimistic accuracy estimates similar to those observed with statistical inference. Remedies for such pitfalls are also presented, such as building models using cross-validation and employing ensemble models. Finally, ML models are typically regarded as black boxes, and we discuss strategies aimed at making their predictions more transparent.
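The cross-validation remedy mentioned above can be illustrated with a minimal, dependency-free sketch. Everything here (the midpoint-threshold classifier, the fold scheme) is an assumption for illustration, not the paper's method; the point is only that accuracy is estimated on held-out folds, never on the data used for fitting:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and deal them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_threshold(xs, ys):
    """Toy classifier: midpoint between class means on one feature."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    cut, one_above = (m0 + m1) / 2, m1 > m0
    return lambda x: int((x > cut) == one_above)

def cv_accuracy(xs, ys, k=5):
    """Mean accuracy over k folds, each scored only on held-out data."""
    accs = []
    for test_idx in k_fold_indices(len(xs), k):
        train_idx = [j for j in range(len(xs)) if j not in test_idx]
        clf = fit_threshold([xs[j] for j in train_idx],
                            [ys[j] for j in train_idx])
        accs.append(sum(clf(xs[j]) == ys[j] for j in test_idx) / len(test_idx))
    return sum(accs) / len(accs)
```

With clearly separated toy data such as `xs = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]` and `ys = [0, 0, 0, 0, 1, 1, 1, 1]`, `cv_accuracy(xs, ys, k=4)` returns 1.0; on real, noisy data the cross-validated figure is typically lower than the training-set accuracy, which is exactly the over-optimism the abstract warns about.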
Affiliation(s)
- Graziella Orrù: Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro: Department of General Psychology, University of Padua, Padua, Italy
- Ciro Conversano: Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Angelo Gemignani: Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
- Giuseppe Sartori: Department of General Psychology, University of Padua, Padua, Italy
16
Mazza C, Orrù G, Burla F, Monaro M, Ferracuti S, Colasanti M, Roma P. Indicators to distinguish symptom accentuators from symptom producers in individuals with a diagnosed adjustment disorder: A pilot study on inconsistency subtypes using SIMS and MMPI-2-RF. PLoS One 2019; 14:e0227113. [PMID: 31887214 PMCID: PMC6936836 DOI: 10.1371/journal.pone.0227113] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Accepted: 12/11/2019] [Indexed: 11/17/2022] Open
Abstract
In the context of legal damage evaluations, evaluees may exaggerate or simulate symptoms in an attempt to obtain greater economic compensation. To date, practitioners and researchers have focused on detecting malingering behavior as an exclusively unitary construct. However, we argue that there are two types of inconsistent behavior that speak to possible malingering: accentuating (i.e., exaggerating symptoms that are actually experienced) and simulating (i.e., fabricating symptoms entirely), each with its own unique attributes; thus, it is necessary to distinguish between them. The aim of the present study was to identify objective indicators to differentiate symptom accentuators from symptom producers and consistent participants. We analyzed the Structured Inventory of Malingered Symptomatology (SIMS) scales and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) validity scales of 132 individuals with a diagnosed adjustment disorder with mixed anxiety and depressed mood who had undergone assessment for psychiatric/psychological damage. The results indicated that the SIMS Total Score, Neurologic Impairment, and Low Intelligence scales and the MMPI-2-RF Infrequent Responses (F-r) and Response Bias (RBS) scales successfully discriminated among symptom accentuators, symptom producers, and consistent participants. Machine learning analysis was used to identify the most efficient parameter for classifying these three groups, recognizing the SIMS Total Score as the best indicator.
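The final step described above, identifying which indicator best separates the groups, can be sketched as a simple ranking. The criterion here (an absolute standardized mean difference) and the scores in the usage note are illustrative assumptions, not the study's actual analysis or data:

```python
import statistics

def separability(group_a, group_b):
    """Absolute standardized mean difference with a pooled SD:
    a simple screen for how well one scale score separates two groups."""
    pooled_sd = ((statistics.pvariance(group_a)
                  + statistics.pvariance(group_b)) / 2) ** 0.5
    return abs(statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def rank_indicators(scales):
    """scales: {name: (scores_group_1, scores_group_2)}.
    Returns the indicator names, best separator first."""
    return sorted(scales, key=lambda name: separability(*scales[name]),
                  reverse=True)
```

With invented scores in which a hypothetical "SIMS_total" differs sharply between two groups while a "Low_Intelligence" score barely does, `rank_indicators` puts "SIMS_total" first, mirroring the study's finding that the SIMS Total Score was the best single indicator.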
Affiliation(s)
- Cristina Mazza: Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Graziella Orrù: Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Franco Burla: Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Merylin Monaro: Department of General Psychology, University of Padova, Padova, Italy
- Stefano Ferracuti: Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Marco Colasanti: Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Paolo Roma: Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy