1. Tu M. Named entity recognition and emotional viewpoint monitoring in online news using artificial intelligence. PeerJ Comput Sci 2024;10:e1715. PMID: 38259884; PMCID: PMC10803082; DOI: 10.7717/peerj-cs.1715.
Abstract
Network news is an important way for netizens to get social information, but the sheer volume of news makes it difficult for them to find key information. Named entity recognition technology backed by artificial intelligence can classify places, dates and other information in text. This article combines named entity recognition with deep learning. Specifically, the proposed method introduces an automatic annotation approach for Chinese entity triggers and a Named Entity Recognition (NER) model that can achieve high accuracy with a small amount of training data. The method jointly trains sentence and trigger vectors through a trigger-matching network, utilizing the trigger vectors as attention queries for the subsequent sequence annotation model. Furthermore, the proposed method employs entity labels to recognize neologisms in web news, enabling customization of the set of sensitive words to be detected and the number of words within that set, as well as extending the web news sentiment lexicon for sentiment observation. Experimental results demonstrate that the proposed model outperforms the traditional BiLSTM-CRF model, achieving superior performance with only a 20% training split compared to the 40% split required by the conventional model. Moreover, the loss-function curves show that the proposed model exhibits better accuracy and faster convergence than the compared model. Finally, the proposed model achieves an average accuracy of 97.88% in sentiment viewpoint detection.
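The trigger-as-attention-query idea can be pictured with a toy sketch (this is a stand-in, not the paper's model: the function name `trigger_attention`, the two-dimensional vectors, and the plain dot-product scoring are all assumptions). A trigger vector scores each token vector, and the resulting attention weights produce a context vector that a downstream sequence tagger could condition on:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def trigger_attention(token_vecs, trigger_vec):
    """Use the trigger vector as the attention query over token vectors,
    returning per-token weights and the attended context vector."""
    weights = softmax([dot(t, trigger_vec) for t in token_vecs])
    dim = len(trigger_vec)
    context = [sum(w * t[i] for w, t in zip(weights, token_vecs))
               for i in range(dim)]
    return weights, context
```

Tokens whose vectors align with the trigger receive higher weights, so entity-like positions dominate the context fed to the tagger.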
Affiliation(s)
- Manzi Tu
- School of Humanities and Communication, Hubei University of Science and Technology, Xianning, Hubei, China
2. Claret AF, Casali KR, Cunha TS, Moraes MC. Automatic Classification of Emotions Based on Cardiac Signals: A Systematic Literature Review. Ann Biomed Eng 2023;51:2393-2414. PMID: 37543539; DOI: 10.1007/s10439-023-03341-8.
Abstract
Emotions play a pivotal role in human cognition, exerting influence across diverse domains of individuals' lives. The widespread adoption of artificial intelligence and machine learning has spurred interest in systems capable of automatically recognizing and classifying emotions and affective states. However, the accurate identification of human emotions remains a formidable challenge, as they are influenced by various factors and accompanied by physiological changes. Numerous solutions have emerged to enable emotion recognition, leveraging the characterization of biological signals, including cardiac signals acquired from low-cost and wearable sensors. The objective of this work was to comprehensively investigate current trends in the field by conducting a Systematic Literature Review (SLR) focused specifically on the detection, recognition, and classification of emotions based on cardiac signals, to gain insights into the prevailing techniques for signal acquisition, the extracted features, the elicitation process, and the classification methods employed in these studies. An SLR was conducted using four research databases, and articles were assessed against the proposed research questions. Twenty-seven articles met the selection criteria and were assessed for the feasibility of using cardiac signals, acquired from low-cost and wearable devices, for emotion recognition. Several emotional elicitation methods were found in the literature, along with the algorithms applied for automatic classification and the key challenges associated with emotion recognition relying solely on cardiac signals. This study extends the current body of knowledge and enables future research by providing insights into suitable techniques for designing automatic emotion recognition applications. It emphasizes the importance of utilizing low-cost, wearable, and unobtrusive devices to acquire cardiac signals for accurate and accessible emotion recognition.
Affiliation(s)
- Anderson Faria Claret
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Karina Rabello Casali
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Tatiana Sousa Cunha
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Matheus Cardoso Moraes
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
3. Nandini D, Yadav J, Rani A, Singh V. Design of subject independent 3D VAD emotion detection system using EEG signals and machine learning algorithms. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104894.
4. Mahrukh R, Shakil S, Malik AS. Sentiments analysis of fMRI using automatically generated stimuli labels under naturalistic paradigm. Sci Rep 2023;13:7267. PMID: 37142654; PMCID: PMC10160115; DOI: 10.1038/s41598-023-33734-7.
Abstract
Our emotions and sentiments are influenced by naturalistic stimuli such as the movies we watch and the songs we listen to, accompanied by changes in our brain activation. Comprehension of these brain-activation dynamics can assist in identifying associated neurological conditions such as stress and depression, leading towards informed decisions about suitable stimuli. A large number of open-access functional magnetic resonance imaging (fMRI) datasets collected under naturalistic conditions can be used for classification/prediction studies. However, these datasets do not provide emotion/sentiment labels, which limits their use in supervised learning studies. Manual labeling by subjects can generate these labels; however, this method is subjective and biased. In this study, we propose an alternative approach that generates labels automatically from the naturalistic stimulus itself. We use sentiment analyzers (VADER, TextBlob, and Flair) from natural language processing to generate labels from movie subtitles. The subtitle-generated labels are used as class labels for positive, negative, and neutral sentiments in the classification of brain fMRI images. Support vector machine, random forest, decision tree, and deep neural network classifiers are used. We obtain reasonably good classification accuracy (42-84%) for imbalanced data, which increases (55-99%) for balanced data.
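The automatic labeling step can be pictured with a miniature lexicon-based scorer in the spirit of VADER (a hypothetical toy, not the actual VADER, TextBlob, or Flair implementation; the `LEXICON` entries, the negation handling, and the ±0.5 threshold are invented for illustration):

```python
# Hypothetical mini-lexicon; real analyzers use far richer resources.
LEXICON = {"good": 1.0, "great": 2.0, "happy": 1.5,
           "bad": -1.0, "sad": -1.5, "terrible": -2.0}
NEGATORS = {"not", "never", "no"}

def label_subtitle(text, threshold=0.5):
    """Assign a positive/negative/neutral class label to one subtitle line."""
    score, flip = 0.0, 1
    for tok in text.lower().split():
        if tok in NEGATORS:
            flip = -1          # flip polarity of the next sentiment word
            continue
        if tok in LEXICON:
            score += flip * LEXICON[tok]
        flip = 1
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"
```

Each subtitle segment labeled this way would then tag the fMRI volumes recorded while the viewer watched that segment.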
Affiliation(s)
- Sadia Shakil
- Institute of Space Technology, Islamabad, Pakistan
- Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Aamir Saeed Malik
- Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic
5. Singh J, Arya R. Examining the relationship of personality traits with online teaching using emotive responses and physiological signals. Educ Inf Technol 2023;28:1-27. PMID: 36818432; PMCID: PMC9925935; DOI: 10.1007/s10639-023-11619-6.
Abstract
In the education sector, the use of online teaching and learning scenarios is increasing rapidly, and making these scenarios more effective is the main purpose of this study. Although many factors play a role, the primary focus is to find the relationship between a teacher's personality and their liking for online teaching. To conduct the study, a framework has been proposed that mixes self-reported data (emotions and personality) with the physiological responses of a teacher. In the self-reported data, learners' perception of a teacher's personality is considered alongside the teachers' own, exploring its relationship with online teaching. The final results reveal that teachers with high levels of the agreeableness, conscientiousness, and openness personality traits are more comfortable with online teaching than those high in extraversion and neuroticism. To validate the self-reported data analysis, the physiological responses of teachers were recorded, ensuring the authenticity of the collected data. The results also show that physiological responses, together with emotions, are good indicators for personality recognition.
Affiliation(s)
- Jaiteg Singh
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- Resham Arya
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
6. Multitasking of sentiment detection and emotion recognition in code-mixed Hinglish data. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.110182.
7. Tan X, Fan Y, Sun M, Zhuang M, Qu F. An Emotion Index Estimation based on Facial Action Unit Prediction. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.11.019.
8. Affective State Recognition in Livestock—Artificial Intelligence Approaches. Animals (Basel) 2022;12:759. PMID: 35327156; PMCID: PMC8944789; DOI: 10.3390/ani12060759.
Abstract
Simple Summary
Recognition of emotions or affective states in farm animals is an underexplored research domain. Despite significant advances in animal welfare research, animal affective state computing, through the development and application of devices and platforms that can not only recognize but also interpret and process emotions, is in a nascent stage. The analysis and measurement of unique behavioural, physical, and biological characteristics offered by biometric sensor technologies, and the affiliated complex and large data sets, open the pathway for novel and realistic identification of individual animals within a herd or flock. By capitalizing on the immense potential of biometric sensors, artificial intelligence enabled big-data methods offer substantial advancement of animal welfare standards and meet the urgent need of caretakers to respond effectively to maintain the wellbeing of their animals.
Abstract
Farm animals, numbering over 70 billion worldwide, are increasingly managed in large-scale, intensive farms. With both public awareness and scientific evidence growing that farm animals experience suffering, as well as affective states such as fear, frustration and distress, there is an urgent need to develop efficient and accurate methods for monitoring their welfare. At present, there are no scientifically validated 'benchmarks' for quantifying transient emotional (affective) states in farm animals, and no established measures of good welfare, only indicators of poor welfare, such as injury, pain and fear. Conventional approaches to monitoring livestock welfare are time-consuming, interrupt farming processes and involve subjective judgments. Biometric sensor data enabled by artificial intelligence is an emerging smart solution for unobtrusively monitoring livestock, but its potential for quantifying affective states, and the ground-breaking solutions its application enables, are yet to be realized. This review provides innovative methods for collecting big data on farm animal emotions, which can be used to train artificial intelligence models to classify, quantify and predict affective states in individual pigs and cows. Extending this to the group level, social network analysis can be applied to model emotional dynamics and contagion among animals. Finally, 'digital twins' of animals capable of simulating and predicting their affective states and behaviour in real time are a near-term possibility.
9. VR-PEER: A Personalized Exer-Game Platform Based on Emotion Recognition. Electronics 2022. DOI: 10.3390/electronics11030455.
Abstract
Motor rehabilitation exercises require recurrent repetition to enhance patients' gestures. However, these repetitive gestures usually decrease patients' motivation and stress them. Virtual Reality (VR) exer-games (serious games in general) could be an alternative solution to this problem: this technology encourages patients to train different gestures with less effort, since they are totally immersed in an easy-to-play exer-game. Despite this evolution, patients using available exer-games still struggle to perform their gestures correctly and without pain, because the developed applications do not consider the patients' psychological states during play. We therefore believe it is necessary to develop personalized, adaptive exer-games that take into consideration the patients' emotions during rehabilitation exercises. This paper proposes VR-PEER, an adaptive exer-game system based on emotion recognition. The platform contains three main modules: (1) a computing and interpretation module, (2) an emotion recognition module, and (3) an adaptation module. Furthermore, a virtual reality-based serious game was developed as a case study, which uses updated facial expression data and dynamically provides the patient an appropriate game to play during rehabilitation exercises. An experimental study conducted on fifteen subjects showed that they found the proposed system useful in the motor rehabilitation process.
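The adaptation module's role can be sketched as a simple rule mapping the recognized facial emotion to the next exercise difficulty (a hypothetical illustration only; the emotion labels, level range, and rules are assumptions, not the paper's actual logic):

```python
def adapt_difficulty(current_level, emotion, min_level=1, max_level=5):
    """Toy adaptation rule: ease off when the recognized facial emotion is
    negative, push gently when the patient appears comfortable."""
    if emotion in ("pain", "anger", "fear", "sadness"):
        return max(min_level, current_level - 1)   # back off one level
    if emotion in ("happiness", "neutral"):
        return min(max_level, current_level + 1)   # progress one level
    return current_level  # surprise/unknown: keep the level unchanged
```

In a real system this decision would run each time the emotion recognition module updates its estimate during the exercise.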
10. A Technology Acceptance Model-Based Analytics for Online Mobile Games Using Machine Learning Techniques. Symmetry (Basel) 2021. DOI: 10.3390/sym13081545.
Abstract
In recent years, advances in technology have made it possible for people to complete tasks more easily. Every manufacturing industry requires heavy machinery to accomplish tasks in a symmetric and systematic way, which such advancement makes much easier. Technological advancement directly affects human life, to the point that humans now depend on it fully. The online game industry is one example of this breakthrough: developing online games is now a prominent industry at the world level. In this paper, our main objective is to analyze the major factors that encourage the mobile games industry to expand. Analyzing the system and the symmetric relations inside it is done in two phases. The first phase uses a TAM model, a very efficient way to frame such statistical problems, and the second uses machine learning (ML) techniques, such as SVM and logistic regression. Both strategies are popular and efficient for analyzing a system while maintaining its symmetry. According to the results from both the TAM model and the ML approach, it is clear that perceived usefulness, attitude, and symmetric flow are important factors for the game industry. The analytics provide a clear insight that perceived usefulness is an important parameter, over behavioral intention, for the online mobile game industry.
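The second phase above can be pictured with a plain logistic regression trained by gradient descent (a self-contained sketch; the toy survey scores and labels below are invented, and the paper's actual feature set and ML toolchain are not shown here):

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient-descent logistic regression, e.g. predicting
    behavioral intention (0/1) from survey factors such as perceived usefulness."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            err = p - yi                     # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if z > 0 else 0
```

With, say, normalized perceived-usefulness scores as the single feature, the fitted weight's sign and size indicate how strongly that factor drives intention.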
11. Ferreira CP, González-González CS, Adamatti DF. Business Simulation Games Analysis Supported by Human-Computer Interfaces: A Systematic Review. Sensors (Basel) 2021;21:4810. PMID: 34300549; PMCID: PMC8309693; DOI: 10.3390/s21144810.
Abstract
This article performs a systematic review of studies to answer the question: what research relates to the learning process with (serious) business games using data collection via electroencephalogram or eye-tracking signals? The PRISMA declaration method was used to guide the search for and inclusion of works related to this study. The 19 references resulting from the critical evaluation initially point to a gap in investigations into using these devices to monitor serious games for learning in organizational environments. A comparison with equivalent sensing studies in serious games for building skills and competencies indicates that continuous monitoring measures, such as mental state and eye fixation, effectively identify players' attention levels. These studies also showed effectiveness in capturing flow at different moments of the task, motivating and justifying their replication as a source of insights for the optimized design of business learning tools. This study is the first systematic review to consolidate the existing literature on user experience analysis of business simulation games supported by human-computer interfaces.
Affiliation(s)
- Cleiton Pons Ferreira
- Research and Innovation Department, Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Sul, Rio Grande 96201-460, Brazil
- Computer Engineering and Systems Department, Universidad de La Laguna, Avda. Astrofísico F. Sanchez s/n, 38204 La Laguna, Tenerife, Spain
- Centro de Ciências Computacionais, Universidade Federal do Rio Grande, Av. Itália, s/n, km 8-Carreiros, Rio Grande 96203-900, Brazil
- Carina Soledad González-González
- Computer Engineering and Systems Department, Universidad de La Laguna, Avda. Astrofísico F. Sanchez s/n, 38204 La Laguna, Tenerife, Spain
- Diana Francisca Adamatti
- Centro de Ciências Computacionais, Universidade Federal do Rio Grande, Av. Itália, s/n, km 8-Carreiros, Rio Grande 96203-900, Brazil
12. Petrescu L, Petrescu C, Oprea A, Mitruț O, Moise G, Moldoveanu A, Moldoveanu F. Machine Learning Methods for Fear Classification Based on Physiological Features. Sensors (Basel) 2021;21:4519. PMID: 34282759; PMCID: PMC8271969; DOI: 10.3390/s21134519.
Abstract
This paper focuses on binary classification of the emotion of fear, based on the physiological data and subjective responses stored in the DEAP dataset. We performed a mapping between the discrete and dimensional emotional information considering the participants' ratings and extracted a substantial set of 40 types of features from the physiological data, which served as input to various machine learning algorithms (Decision Trees, k-Nearest Neighbors, Support Vector Machines and artificial neural networks), accompanied by dimensionality reduction, feature selection and tuning of the most relevant hyperparameters to boost classification accuracy. Our methodology tackled several practical issues: resolving the imbalanced dataset through data augmentation, reducing overfitting, computing various metrics to obtain the most reliable classification scores, and applying the Local Interpretable Model-Agnostic Explanations method to interpret and explain predictions in a human-understandable manner. The results show that fear can be predicted very well (accuracies ranging from 91.7% using Gradient Boosting Trees to 93.5% using dimensionality reduction and a Support Vector Machine) by extracting the most relevant features from the physiological data and searching for the parameters that maximize the classifiers' scores.
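One of the practical issues mentioned above, balancing an imbalanced dataset through augmentation, can be sketched as jittered oversampling of the minority class (a minimal stand-in for the paper's augmentation step; the function name, the Gaussian jitter, and the 0.01 noise scale are assumptions):

```python
import random

def oversample(features, labels, seed=0):
    """Balance a dataset by resampling minority classes with small
    Gaussian jitter, until every class matches the largest one."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        out_x.extend(xs)
        out_y.extend([y] * len(xs))
        for _ in range(target - len(xs)):
            base = rng.choice(xs)            # copy a minority sample...
            out_x.append([v + rng.gauss(0, 0.01) for v in base])  # ...with jitter
            out_y.append(y)
    return out_x, out_y
```

The jitter keeps synthetic samples close to real ones, which helps avoid the classifier simply memorizing duplicated points.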
Affiliation(s)
- Livia Petrescu (corresponding author)
- Faculty of Biology, University of Bucharest, 050095 Bucharest, Romania
- Cătălin Petrescu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Ana Oprea
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Oana Mitruț
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Gabriela Moise
- Faculty of Letters and Sciences, Petroleum-Gas University of Ploiesti, 100680 Ploiesti, Romania
- Alin Moldoveanu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Florica Moldoveanu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
13. Emotion Recognition from ECG Signals Using Wavelet Scattering and Machine Learning. Appl Sci (Basel) 2021. DOI: 10.3390/app11114945.
Abstract
Affect detection combined with a system that dynamically responds to a person's emotional state allows an improved user experience with computers, systems, and environments, and has a wide range of applications, including entertainment and health care. Previous studies on this topic have used a variety of machine learning algorithms and inputs such as audial, visual, or physiological signals. Recently, much interest has focused on the latter, as speech or video recording is impractical for some applications. Therefore, there is a need to create human-computer interface systems capable of recognizing emotional states from noninvasive and nonintrusive physiological signals. Typically, the recognition task is carried out on electroencephalogram (EEG) signals, obtaining good accuracy. However, EEGs are difficult to register without interfering with daily activities, and recent studies have shown that it is possible to use electrocardiogram (ECG) signals for this purpose. This work improves the performance of emotion recognition from ECG signals using the wavelet transform for signal analysis. Features of the ECG signal are extracted from the AMIGOS database using a wavelet scattering algorithm that obtains features of the signal at different time scales, which are then used as inputs for different classifiers to evaluate their performance. The results show that the proposed algorithm for extracting features and classifying the signals obtains an accuracy of 88.8% in the valence dimension, 90.2% in arousal, and 95.3% in a two-dimensional classification, which is better than the performance reported in previous studies. This algorithm is expected to be useful for classifying emotions using wearable devices.
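The time-scale feature idea can be illustrated with a crude Haar-wavelet energy extractor (a sketch only: true wavelet scattering cascades wavelet transforms with modulus nonlinearities and low-pass averaging, which this simplified version does not do):

```python
def haar_level(signal):
    """One Haar step: pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def multiscale_energy(signal, levels=3):
    """Energy of the detail coefficients at each scale, as crude
    time-scale features for a downstream classifier."""
    feats = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_level(current)
        feats.append(sum(d * d for d in detail))  # detail energy at this scale
        if len(current) < 2:
            break
    return feats
```

Applied per heartbeat segment, such multiscale energies would form the feature vector handed to an SVM or similar classifier.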
14. Ghosh S, Ekbal A, Bhattacharyya P. A Multitask Framework to Detect Depression, Sentiment and Multi-label Emotion from Suicide Notes. Cognit Comput 2021. DOI: 10.1007/s12559-021-09828-7.
15.
Abstract
Many scientific studies have been concerned with building automatic systems to recognize emotions, and building such systems usually relies on brain signals. These studies have shown that brain signals can be used to classify many emotional states, a process considered difficult, especially since the brain's signals are not stable. Human emotions are generated as reactions to different emotional states, which affect brain signals. Thus, the performance of emotion recognition systems based on brain signals depends on the efficiency of the algorithms used to extract features, the feature selection algorithm, and the classification process. Recently, the study of electroencephalography (EEG) signals has received much attention due to the availability of several standard databases, especially since brain signal recording devices, including wireless ones, have become available on the market at reasonable prices. This work presents an automated model for identifying emotions based on EEG signals. The proposed model focuses on creating an effective method that combines the basic stages of EEG signal handling and feature extraction. Different from previous studies, the main contribution of this work lies in using empirical mode decomposition/intrinsic mode functions (EMD/IMF) and variational mode decomposition (VMD) for signal processing. Although EMD/IMF and VMD methods are widely used in biomedical and disease-related studies, they are not commonly utilized in emotion recognition; in other words, the signal processing methods used in this work differ from those used in the literature. After the signal processing stage, in the feature extraction stage, two well-known techniques were used: entropy and Higuchi's fractal dimension (HFD). Finally, in the classification stage, four classification methods were used (naïve Bayes, k-nearest neighbor (k-NN), convolutional neural network (CNN), and decision tree (DT)) to classify emotional states. To evaluate the performance of the proposed model, experiments were applied to the common DEAP database using several evaluation measures, including accuracy, specificity, and sensitivity. The experiments showed the efficiency of the proposed method: a 95.20% accuracy was achieved using the CNN-based method.
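Higuchi's fractal dimension, one of the two features named above, can be computed with a short pure-Python routine (a sketch that assumes a non-constant input signal; `kmax=8` is a common but arbitrary choice). For a straight line the estimate is 1, and it approaches 2 for very irregular signals:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D signal (assumes x is non-constant)."""
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            num = (n - 1 - m) // k  # number of usable increments in this subseries
            if num < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, num + 1))
            # normalized curve length for offset m at scale k
            lengths.append(dist * (n - 1) / (num * k * k))
        if lengths:
            log_k.append(math.log(1.0 / k))
            log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) against log(1/k) is the dimension
    mk = sum(log_k) / len(log_k)
    ml = sum(log_l) / len(log_l)
    cov = sum((a - mk) * (b - ml) for a, b in zip(log_k, log_l))
    var = sum((a - mk) ** 2 for a in log_k)
    return cov / var
```

In an EEG pipeline this value, computed per channel and epoch, would join the entropy features as classifier input.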
16. Cognitive and Affective Assessment of Navigation and Mobility Tasks for the Visually Impaired via Electroencephalography and Behavioral Signals. Sensors (Basel) 2020;20:5821. PMID: 33076251; PMCID: PMC7602506; DOI: 10.3390/s20205821.
Abstract
This paper presents the assessment of cognitive load (as an effective real-time index of task difficulty) and the level of brain activation during an experiment in which eight visually impaired subjects performed two types of tasks while using the white cane and the Sound of Vision assistive device with three types of sensory input: audio, haptic, and multimodal (audio and haptic simultaneously). The first task was to identify object properties and the second to navigate and avoid obstacles in both virtual-environment and real-world settings. The results showed that the haptic stimuli were less intuitive than the audio ones and that navigation with the Sound of Vision device increased cognitive load and working memory use. Visual cortex asymmetry was lower under multimodal stimulation than under separate stimulation (audio or haptic). There was no correlation between visual cortical activity and the number of collisions during navigation, regardless of the type of navigation or sensory input. The visual cortex was activated when using the device, but only for the late-blind users. For all the subjects, navigation with the Sound of Vision device induced a low negative valence, in contrast with white cane navigation.
17. Expressure: Detect Expressions Related to Emotional and Cognitive Activities Using Forehead Textile Pressure Mechanomyography. Sensors (Basel) 2020;20:730. PMID: 32013009; PMCID: PMC7038450; DOI: 10.3390/s20030730.
Abstract
We investigate how pressure-sensitive smart textiles, in the form of a headband, can detect changes in facial expressions that are indicative of emotions and cognitive activities. Specifically, we present the Expressure system that performs surface pressure mechanomyography on the forehead using an array of textile pressure sensors that is not dependent on specific placement or attachment to the skin. Our approach is evaluated in systematic psychological experiments. First, through a mimicking expression experiment with 20 participants, we demonstrate the system’s ability to detect well-defined facial expressions. We achieved accuracies of 0.824 to classify among three eyebrow movements (0.333 chance-level) and 0.381 among seven full-face expressions (0.143 chance-level). A second experiment was conducted with 20 participants to induce cognitive loads with N-back tasks. Statistical analysis has shown significant correlations between the Expressure features on a fine time granularity and the cognitive activity. The results have also shown significant correlations between the Expressure features and the N-back score. From the 10 most facially expressive participants, our approach can predict whether the N-back score is above or below the average with 0.767 accuracy.
18. Bălan O, Moise G, Moldoveanu A, Leordeanu M, Moldoveanu F. An Investigation of Various Machine and Deep Learning Techniques Applied in Automatic Fear Level Detection and Acrophobia Virtual Therapy. Sensors (Basel) 2020;20:496. PMID: 31952289; PMCID: PMC7013944; DOI: 10.3390/s20020496.
Abstract
In this paper, we investigate various machine learning classifiers used in our Virtual Reality (VR) system for treating acrophobia. The system automatically estimates fear level based on multimodal sensory data and a self-reported emotion assessment. There are two modalities of expressing fear ratings: the 2-choice scale, where 0 represents relaxation and 1 stands for fear; and the 4-choice scale, with the following correspondence: 0—relaxation, 1—low fear, 2—medium fear and 3—high fear. A set of features was extracted from the sensory signals using various metrics that quantify brain (electroencephalogram—EEG) and physiological linear and non-linear dynamics (Heart Rate—HR and Galvanic Skin Response—GSR). The novelty consists in the automatic adaptation of exposure scenario according to the subject’s affective state. We acquired data from acrophobic subjects who had undergone an in vivo pre-therapy exposure session, followed by a Virtual Reality therapy and an in vivo evaluation procedure. Various machine and deep learning classifiers were implemented and tested, with and without feature selection, in both a user-dependent and user-independent fashion. The results showed a very high cross-validation accuracy on the training set and good test accuracies, ranging from 42.5% to 89.5%. The most important features of fear level classification were GSR, HR and the values of the EEG in the beta frequency range. For determining the next exposure scenario, a dominant role was played by the target fear level, a parameter computed by taking into account the patient’s estimated fear level.
Affiliation(s)
- Oana Bălan (corresponding author; Tel.: +4072-2276-571)
- Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, Bucharest 060042, Romania
- Gabriela Moise
- Department of Computer Science, Information Technology, Mathematics and Physics, Petroleum-Gas University of Ploiesti, Ploiesti 100680, Romania
- Alin Moldoveanu
- Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, Bucharest 060042, Romania
- Marius Leordeanu
- Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, Bucharest 060042, Romania
- Florica Moldoveanu
- Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, Bucharest 060042, Romania