1. Aslan M, Baykara M, Alakus TB. LieWaves: dataset for lie detection based on EEG signals and wavelets. Med Biol Eng Comput 2024;62:1571-1588. PMID: 38311647. DOI: 10.1007/s11517-024-03021-2. Received 08/17/2023; accepted 01/09/2024.
Abstract
This study introduces an electroencephalography (EEG)-based dataset for lie detection. Lie detection from EEG signals has recently become a significant research topic: while everyday lies may have little societal impact, detecting deception is crucial in legal, security, and job-interview settings, and in other situations that could affect the community. The aims of this study were to acquire EEG signals for lie detection, build a dataset from them, and analyze that dataset with signal processing and deep learning methods. EEG signals were recorded from 27 individuals using the Emotiv Insight, a wearable EEG device with five channels (AF3, T7, Pz, T8, AF4). Each participant completed two sessions, one honest and one deceitful; in each session, while a video clip played, participants responded to beads they had seen before the experiment, from which they had stolen in the deceitful condition. The study consisted of four stages. In the first stage, the LieWaves dataset was created from the EEG recordings of these sessions. In the second stage, preprocessing was performed: the automatic and tunable artifact removal (ATAR) algorithm was applied to remove artifacts from the EEG signals, and the overlapping sliding window (OSW) method was then used for data augmentation. In the third stage, features were extracted by analyzing the EEG signals with a combination of the discrete wavelet transform (DWT) and the fast Fourier transform (FFT), together with statistical methods (SM). In the last stage, each feature vector was classified separately using convolutional neural network (CNN), long short-term memory (LSTM), and hybrid CNN-LSTM models. The best result, an accuracy of 99.88%, was obtained with the LSTM classifier on DWT features.
This study thus introduces a new dataset to the literature, intended to address the shortage of data in this field; the evaluation results show that the dataset can support effective EEG-based lie detection research.
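The preprocessing and feature pipeline the abstract describes (OSW augmentation, then statistics over DWT sub-bands) can be sketched roughly as follows. The window length, step size, Haar wavelet, and feature set here are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Overlapping sliding window (OSW) augmentation: cut a 1-D signal
    into fixed-length windows that overlap by (win_len - step) samples."""
    n = (len(signal) - win_len) // step + 1
    return np.stack([signal[i * step : i * step + win_len] for i in range(n)])

def haar_dwt(x):
    """Single-level Haar DWT: approximation (a) and detail (d) coefficients."""
    x = x[: len(x) // 2 * 2]                       # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def window_features(win):
    """Statistical features (the SM step) computed on the DWT sub-bands."""
    a, d = haar_dwt(win)
    return np.array([a.mean(), a.std(), d.mean(), d.std()])

sig = np.sin(np.linspace(0, 20 * np.pi, 1280))      # stand-in for one EEG channel
wins = sliding_windows(sig, win_len=256, step=128)  # 50% overlap
feats = np.stack([window_features(w) for w in wins])
print(wins.shape, feats.shape)                      # (9, 256) (9, 4)
```

Each feature row would then be fed to the CNN, LSTM, or CNN-LSTM classifier; a production pipeline would use a proper wavelet library and the paper's own window parameters.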
Affiliation(s)
- Musa Aslan: Department of Software Engineering, Karadeniz Technical University, Trabzon, Turkey
- Muhammet Baykara: Department of Software Engineering, Firat University, Elazig, Turkey
- Talha Burak Alakus: Department of Software Engineering, Kirklareli University, Kirklareli, Turkey
2. Chen C, Fan L, Gao Y, Qiu S, Wei W, He H. EEG-FRM: a neural network based familiar and unfamiliar face EEG recognition method. Cogn Neurodyn 2024;18:357-370. PMID: 38699605. PMCID: PMC11061081. DOI: 10.1007/s11571-024-10073-5. Received 08/24/2023; revised 12/28/2023; accepted 01/23/2024.
Abstract
Recognizing familiar faces holds great value in fields such as medicine, criminal investigation, and lie detection. In this paper, we designed a Complex Trial Protocol-based familiar and unfamiliar face recognition experiment using self-face information, and collected EEG data from 147 subjects. We propose a novel neural network-based method, the EEG-based Face Recognition Model (EEG-FRM), for cross-subject familiar/unfamiliar face recognition; it combines a multi-scale convolutional classification network with a maximum probability mechanism to realize individual face recognition. The multi-scale convolutional neural network extracts temporal information and spatial features from the EEG data, while an attention module and a supervised contrastive learning module improve classification performance. Experimental results on the dataset reveal that familiar face stimuli evoke significant P300 responses, concentrated mainly in the parietal lobe and nearby regions. The proposed model achieved a balanced accuracy of 85.64%, a true positive rate of 73.23%, and a false positive rate of 1.96% on the collected dataset, outperforming the compared methods and demonstrating its effectiveness.
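The "maximum probability mechanism" the abstract names is not defined here. One plausible reading, sketched below purely as an assumption, is that the network produces a familiar-class probability per trial, and the candidate face whose trial-averaged probability is highest is declared the familiar one.

```python
import numpy as np

def recognize_familiar(probs):
    """Illustrative guess at a maximum-probability decision rule (not the
    paper's definition): average each candidate face's per-trial
    familiar-class probability and pick the highest-scoring face."""
    probs = np.asarray(probs, dtype=float)   # shape: (n_stimuli, n_trials)
    mean_p = probs.mean(axis=1)              # average over repeated trials
    return int(np.argmax(mean_p)), mean_p

# Hypothetical familiar-class probabilities for 4 face stimuli x 3 trials
p = [[0.2, 0.3, 0.1],
     [0.8, 0.7, 0.9],
     [0.4, 0.2, 0.3],
     [0.1, 0.2, 0.2]]
idx, scores = recognize_familiar(p)
print(idx)  # 1: the second stimulus has the highest mean probability
```

Averaging over repeated trials is what makes a per-trial classifier usable for individual recognition despite noisy single-trial P300 responses.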
Affiliation(s)
- Chao Chen: Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Lingfeng Fan: Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China
- Ying Gao: Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Shuang Qiu: Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing, China
- Wei Wei: Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Huiguang He: Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing, China
3. Bhatt P, Sethi A, Tasgaonkar V, Shroff J, Pendharkar I, Desai A, Sinha P, Deshpande A, Joshi G, Rahate A, Jain P, Walambe R, Kotecha K, Jain NK. Machine learning for cognitive behavioral analysis: datasets, methods, paradigms, and research directions. Brain Inform 2023;10:18. PMID: 37524933. PMCID: PMC10390406. DOI: 10.1186/s40708-023-00196-6. Received 03/03/2023; accepted 06/06/2023.
Abstract
Human behaviour reflects cognitive abilities. Human cognition is fundamentally linked to different experiences or characteristics of consciousness and emotion, such as joy, grief, and anger, which assist in effective communication with others. Detecting and differentiating between thoughts, feelings, and behaviours is paramount in learning to control our emotions and respond more effectively in stressful circumstances. Cognitive behaviour refers to the ability to perceive, analyse, process, interpret, remember, and retrieve information while making judgments in order to respond correctly. Having made a significant mark in emotion analysis, machine learning is now being applied to deception detection, one of the key areas connecting it to human behaviour, mainly in the forensic domain. Detection of lies, deception, malicious intent, abnormal behaviour, emotions, and stress plays a significant role in the advanced stages of behavioural science. Artificial intelligence and machine learning (AI/ML) have helped a great deal in pattern recognition, data extraction, analysis, and interpretation; the goal of using AI/ML in the behavioural sciences is to infer human behaviour, mainly for mental health or forensic investigations. The presented work provides an extensive review of research on cognitive behaviour analysis, including a parametric study based on physical characteristics, emotional behaviours, data-collection sensing mechanisms, unimodal and multimodal datasets, AI/ML modelling methods, challenges, and future research directions.
Affiliation(s)
- Priya Bhatt, Amanrose Sethi, Vaibhav Tasgaonkar, Jugal Shroff, Isha Pendharkar, Aditya Desai, Pratyush Sinha, Aditya Deshpande, Gargi Joshi, Anil Rahate: Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Priyanka Jain: Centre for Development of Advanced Computing (C-DAC), Delhi, India
- Rahee Walambe: Symbiosis Institute of Technology and Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International Deemed University, Pune, India
- Ketan Kotecha: Symbiosis Institute of Technology and Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International Deemed University, Pune, India; UCSI University, Kuala Lumpur, Malaysia
- N K Jain: Centre for Development of Advanced Computing (C-DAC), Delhi, India
4. Fernisha SR, Christopher CS, Lyernisha SR. Slender Swarm Flamingo optimization-based residual low-light image enhancement network. Imaging Sci J 2023. DOI: 10.1080/13682199.2022.2161156.
Affiliation(s)
- S. R. Fernisha: Information and Communication Engineering, St. Xaviers Catholic College of Engineering, Nagercoil, India
- C. Seldev Christopher: Computer Science and Engineering, St. Xaviers Catholic College of Engineering, Nagercoil, India
- S. R. Lyernisha: Information and Communication Engineering, St. Xaviers Catholic College of Engineering, Nagercoil, India
5. Alaskar H, Sbaï Z, Khan W, Hussain A, Alrawais A. Intelligent techniques for deception detection: a survey and critical study. Soft Comput 2022. DOI: 10.1007/s00500-022-07603-w.
6. Aung ST, Hassan M, Brady M, Mannan ZI, Azam S, Karim A, Zaman S, Wongsawat Y. Entropy-based emotion recognition from multichannel EEG signals using artificial neural network. Comput Intell Neurosci 2022;2022:6000989. PMID: 36275950. PMCID: PMC9584707. DOI: 10.1155/2022/6000989. Received 07/12/2022; accepted 09/22/2022.
Abstract
Humans experience a variety of emotions in daily life, including happiness, sadness, and rage, so an effective emotion identification system is essential if electroencephalography (EEG) data are to reflect emotion accurately in real time. Although recent studies on this problem report acceptable performance measures, they are still not adequate for a complete emotion recognition system. In this work, we propose a new approach to emotion recognition from multichannel EEG, based on our newly developed entropy measure, multivariate multiscale modified-distribution entropy (MM-mDistEn), combined with an artificial neural network (ANN), to improve on existing methods. The proposed system was tested on two datasets and achieved higher accuracy than existing methods. On the GAMEEMO dataset, the average accuracy ± standard deviation was 95.73% ± 0.67 for valence and 96.78% ± 0.25 for arousal; on the DEAP dataset, it reached 92.57% ± 1.51 for valence and 80.23% ± 1.83 for arousal.
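MM-mDistEn itself is not defined in the abstract. The sketch below shows only the core single-channel distribution-entropy recipe (embedding, pairwise Chebyshev distances, histogram entropy), which the authors extend to the multivariate, multiscale case; the embedding dimension and bin count are illustrative.

```python
import numpy as np

def dist_entropy(x, m=2, bins=64):
    """Simplified single-channel distribution entropy (DistEn): Shannon
    entropy of the histogram of pairwise Chebyshev distances between
    m-dimensional embedding vectors, normalized by log2(bins)."""
    n = len(x) - m + 1
    emb = np.stack([x[i : n + i] for i in range(m)], axis=1)  # (n, m) embeddings
    # pairwise Chebyshev (max-coordinate) distances, upper triangle only
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    d = d[np.triu_indices(n, k=1)]
    p, _ = np.histogram(d, bins=bins)
    p = p / p.sum()
    p = p[p > 0]                                  # drop empty bins before log
    return float(-(p * np.log2(p)).sum() / np.log2(bins))  # in [0, 1]

rng = np.random.default_rng(0)
de = dist_entropy(rng.standard_normal(500))       # entropy of a noise signal
print(round(de, 3))
```

A multivariate, multiscale version would coarse-grain each channel at several scales and pool embeddings across channels before the distance histogram, which is the direction the MM-mDistEn name suggests.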
Affiliation(s)
- Si Thu Aung: Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Salaya, Thailand
- Mehedi Hassan: Computer Science and Engineering, North Western University, Khulna, Bangladesh
- Mark Brady: Asia Pacific College of Business and Law, Charles Darwin University, Casuarina, NT, Australia
- Zubaer Ibna Mannan: Department of Smart Computing, Kyungdong University, Global Campus, Goseong-Gun, Republic of Korea
- Sami Azam: College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT, Australia
- Asif Karim: College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT, Australia
- Sadika Zaman: Computer Science and Engineering, North Western University, Khulna, Bangladesh
- Yodchanan Wongsawat: Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Salaya, Thailand
7. Benchmarks for machine learning in depression discrimination using electroencephalography signals. Appl Intell 2022. DOI: 10.1007/s10489-022-04159-y.
8. Karnati M, Seal A, Sahu G, Yazidi A, Krejcar O. A novel multi-scale based deep convolutional neural network for detecting COVID-19 from X-rays. Appl Soft Comput 2022;125:109109. PMID: 35693544. PMCID: PMC9167691. DOI: 10.1016/j.asoc.2022.109109. Received 10/23/2021; revised 04/26/2022; accepted 05/26/2022.
Abstract
The COVID-19 pandemic has posed an unprecedented threat to the global public health system, with the virus primarily infecting the airway epithelial cells of the respiratory tract. Chest X-ray (CXR) is widely available, faster, and less expensive than alternatives such as molecular, antigen, and antibody tests or chest computed tomography (CT), so it is preferred for monitoring the lungs in COVID-19 diagnosis. As the pandemic continues to reveal the limitations of current ecosystems, researchers are pooling their knowledge and experience to develop new systems to tackle it. In this work, an end-to-end IoT infrastructure is designed and built to diagnose patients remotely during a pandemic, limiting COVID-19 transmission while also improving measurement science. The proposed framework comprises six steps; in the last step, a model based on a novel deep neural network (DNN) interprets CXR images and intelligently measures the severity of COVID-19 lung infection. The proposed DNN employs multi-scale sampling filters to extract reliable, noise-invariant features from a variety of image patches. Experiments on five publicly available databases (COVIDx, COVID-19 Radiography, COVID-XRay-5K, COVID-19-CXR, and COVIDchestxray) yielded classification accuracies of 96.01%, 99.62%, 99.22%, 98.83%, and 100%, with testing times of 0.541, 0.692, 1.28, 0.461, and 0.202 s, respectively, surpassing fourteen baseline techniques. The model could therefore be used to evaluate treatment efficacy, particularly in remote locations.
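The "multi-scale sampling filters" idea can be illustrated in miniature: filter the same image patch with kernels of several receptive-field sizes and pool each response into a feature vector. The uniform averaging kernels, patch size, and scale set below are stand-ins, not the paper's architecture.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Valid-mode 2-D correlation; one kernel size = one sampling scale."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i : i + kh, j : j + kw] * kernel).sum()
    return out

def multiscale_response(img, scales=(3, 5, 7)):
    """Filter the same patch at several receptive-field sizes and pool
    each response, mirroring the multi-scale idea in the abstract."""
    feats = []
    for k in scales:
        kernel = np.full((k, k), 1.0 / (k * k))  # averaging filter at scale k
        feats.append(conv2d_valid(img, kernel).mean())
    return np.array(feats)

rng = np.random.default_rng(1)
patch = rng.random((32, 32))                     # stand-in for a CXR patch
fv = multiscale_response(patch)
print(fv.shape)                                  # (3,): one pooled value per scale
```

In the real network the kernels are learned and the responses are concatenated as feature maps rather than scalars, but the varying receptive-field sizes are what give the noise-invariant multi-scale features the abstract describes.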
Affiliation(s)
- Mohan Karnati: Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology Design & Manufacturing Jabalpur, Jabalpur, Madhya Pradesh 482005, India
- Ayan Seal: Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology Design & Manufacturing Jabalpur, Jabalpur, Madhya Pradesh 482005, India
- Geet Sahu: Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology Design & Manufacturing Jabalpur, Jabalpur, Madhya Pradesh 482005, India
- Anis Yazidi: Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway; Department of Plastic and Reconstructive Surgery, Oslo University Hospital, Oslo, Norway
- Ondrej Krejcar: Center for Basic and Applied Science, Faculty of Informatics and Management, University of Hradec Kralove, Rokitanskeho 62, 500 03 Hradec Kralove, Czech Republic; Malaysia-Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Jalan Sultan Yahya Petra, 54100 Kuala Lumpur, Malaysia
9. Effective attention feature reconstruction loss for facial expression recognition in the wild. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07016-8.