1
Monno Y, Nawa NE, Yamagishi N. Duration of mood effects following a Japanese version of the mood induction task. PLoS One 2024; 19:e0293871. PMID: 38180997; PMCID: PMC10769078; DOI: 10.1371/journal.pone.0293871.
Abstract
Researchers have employed a variety of methodologies to induce positive and negative mood states in study participants to investigate the influence that mood has on psychological, physiological, and cognitive processes, both in health and illness. Here, we investigated the effectiveness and the duration of mood effects following the mood induction task (MIT), a protocol that combines mood-inducing sentences, auditory stimuli, and autobiographical memory recall, in a cohort of healthy Japanese adults. In Study 1, we translated and augmented the mood-inducing sentences originally proposed by Velten in 1968 and verified that people perceived the translations as being largely congruent with the valence of the original sentences. In Study 2, we developed a Japanese version of the mood induction task (J-MIT) and examined its effectiveness using an online implementation. Results based on data collected immediately after induction showed that the J-MIT was able to modulate mood in the intended direction. However, mood effects were not observed during the subsequent performance of a cognitive task, the Tower of London task, suggesting that the effects did not persist long enough. Overall, the current results show that mood induction procedures such as the J-MIT can alter the mood of study participants in the short term; at the same time, they highlight the need to further examine how mood effects evolve and persist through time to better understand how mood induction protocols can be used to study affective processes more effectively.
Affiliation(s)
- Yasunaga Monno
- Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Ibaraki, Osaka, Japan
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- Norberto Eiji Nawa
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Noriko Yamagishi
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- College of Global Liberal Arts, Ritsumeikan University, Ibaraki, Osaka, Japan
2
Martins THS, Rodrigues RM, Araújo FCO, Cedro ÁM, Bortoloti R, Varella AAB, Huziwara EM. Transfer of functions based on equivalence class formation using musical stimuli. J Exp Anal Behav 2023; 120:394-405. PMID: 37710382; DOI: 10.1002/jeab.881.
Abstract
Empirical evidence supports the claim that musical excerpts written in major and minor modes evoke happiness and sadness, respectively. In this study, we evaluated whether the emotional content evoked by musical stimuli would transfer to abstract figures when they became members of the same equivalence class. Participants assigned to the experimental group underwent a training procedure to form equivalence classes comprising musical excerpts (A) and meaningless abstract figures (B, C, and D). Afterward, transfer of function was evaluated using a semantic differential. Participants in the control group showed positive semantic differential scores for major-mode musical excerpts, negative scores for minor-mode musical excerpts, and neutral scores for the B, C, and D stimuli. Participants in the experimental group showed positive semantic differential scores for visual stimuli equivalent to the major modes and negative semantic differential scores for visual stimuli equivalent to the minor modes. These results indicate transfer of function of the emotional content present in musical stimuli through equivalence class formation. These findings could provide a more comprehensive understanding of the effects of using emotional stimuli in equivalence class formation experiments and in transfer of function itself.
Affiliation(s)
- Raone M Rodrigues
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- Átila M Cedro
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- Renato Bortoloti
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- André A B Varella
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- iABA - Instituto de Análise do Comportamento Aplicada, Brazil
- Edson M Huziwara
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
3
Merritt SH, Gaffuri K, Zak PJ. Accurately predicting hit songs using neurophysiology and machine learning. Front Artif Intell 2023; 6:1154663. PMID: 37408542; PMCID: PMC10318137; DOI: 10.3389/frai.2023.1154663.
Abstract
Identifying hit songs is notoriously difficult. Traditionally, song elements have been measured from large databases to identify the lyrical aspects of hits. We took a different methodological approach, measuring neurophysiologic responses to a set of songs provided by a streaming music service that identified hits and flops. We compared several statistical approaches to examine the predictive accuracy of each technique. A linear statistical model using two neural measures identified hits with 69% accuracy. We then created a synthetic data set and applied ensemble machine learning to capture inherent non-linearities in neural data. This model classified hit songs with 97% accuracy. Applying machine learning to the neural response to the first minute of songs accurately classified hits 82% of the time, showing that the brain rapidly identifies hit music. Our results demonstrate that applying machine learning to neural data can substantially increase classification accuracy for difficult-to-predict market outcomes.
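The two-stage modelling strategy summarized above, a linear baseline compared against an ensemble learner that can capture non-linearities, can be sketched with scikit-learn. This is a purely illustrative toy under stated assumptions: the two "neural measures", the synthetic data, and the choice of gradient boosting as the ensemble method are hypothetical and are not the study's actual pipeline.

```python
# Toy comparison of a linear classifier vs. an ensemble model on
# synthetic "neural" features (illustrative only, not the study's data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Two hypothetical neural measures per song.
X = rng.normal(size=(n, 2))
# Label depends non-linearly on the features, so the ensemble can gain.
y = (X[:, 0] + 0.5 * np.sin(3 * X[:, 1]) + 0.3 * rng.normal(size=n) > 0).astype(int)

linear_acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
ensemble_acc = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5).mean()
print(f"linear: {linear_acc:.2f}, ensemble: {ensemble_acc:.2f}")
```

With a genuinely non-linear decision boundary, the ensemble model would typically match or exceed the linear baseline, mirroring the accuracy gap reported in the abstract.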
Affiliation(s)
- Sean H. Merritt
- Center for Neuroeconomics Studies, Claremont Graduate University, Claremont, CA, United States
- Kevin Gaffuri
- Center for Neuroeconomics Studies, Claremont Graduate University, Claremont, CA, United States
- Paul J. Zak
- Center for Neuroeconomics Studies, Claremont Graduate University, Claremont, CA, United States
- Immersion Neuroscience, Henderson, NV, United States
4
Kim AJ. Differential Effects of Musical Expression of Emotions and Psychological Distress on Subjective Appraisals and Emotional Responses to Music. Behav Sci (Basel) 2023; 13:491. PMID: 37366743; DOI: 10.3390/bs13060491.
Abstract
This study aims to investigate how musical expressions of emotion and individuals' psychological distress impact subjective ratings of emotional response and subjective appraisals, including familiarity, complexity, and preference. A sample of 123 healthy adults participated in an online survey experiment. After listening to four music excerpts with distinct musical expressions of emotional valence and arousal in a randomized sequence, participants rated subjective emotions of energy, tension, and valence, as well as subjective appraisals, on a visual analogue scale ranging from 0 to 100. The results of repeated measures ANOVA demonstrated significant differences in emotional responses and appraisals across the ratings for different music excerpts (p < 0.01, respectively). The generalized linear mixed model results further revealed a significant main effect of musical valence on all emotional response dimensions of energy (β = -4.73 **), tension (β = 14.31 ***), valence level (β = -18.81 ***), and subjective appraisal in terms of familiarity (β = -23.06 ***), complexity (β = -6.67 ***), and preference (β = -19.54 ***). Musical arousal showed comparable results except for effects on emotional valence ratings. However, significant effects of psychological distress regarding depression, anxiety, and stress scores were only partially observed. The findings suggest that the expression of emotions through music primarily influences emotional responses and subjective appraisals, while the influence of an individual's psychological distress level may be relatively subtle.
Affiliation(s)
- Aimee Jeehae Kim
- Department of Musicology and Culture, Music Therapy Major, Graduate School, Dong-A University, Busan 49315, Republic of Korea
5
Herdson O, Eerola T, Javadi AH. Analysis and Classification of Music-Induced States of Sadness. Emotion Review 2022. DOI: 10.1177/17540739221140472.
Abstract
The enjoyment and pleasure derived from sad music have sparked fascination among researchers due to their seemingly paradoxical nature in producing positive affect. Research has yet to develop a comprehensive understanding of this "paradox." Contradictory findings have resulted in great variability within the literature, meaning that results and interpretations can be difficult to derive. Consequently, this review collated the current literature, seeking to utilize the variability in the findings to propose a model of differential sad states, providing a means for past and future findings to be interpreted. The proposed model is based on theoretical understanding and, as such, requires full empirical support. Comparisons to alternative models; theoretical, clinical, and cognitive implications; and future directions are discussed.
Affiliation(s)
- Oliver Herdson
- School of Psychology, University of Kent, Canterbury, UK
- Amir-Homayoun Javadi
- School of Psychology, University of Kent, Canterbury, UK
- School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
6
Ansani A, Marini M, Poggi I, Mallia L. Recognition memory in movie scenes: the soundtrack induces mood-coherent bias, but not through mood induction. Journal of Cognitive Psychology 2022. DOI: 10.1080/20445911.2022.2116448.
Affiliation(s)
- Alessandro Ansani
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Cosmic Lab, Department of Philosophy, Communication, and Performing Arts, Roma Tre University, Rome, Italy
- Marco Marini
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Institute of Cognitive Sciences and Technologies (ISTC), Rome, Italy
- Isabella Poggi
- Cosmic Lab, Department of Philosophy, Communication, and Performing Arts, Roma Tre University, Rome, Italy
- Luca Mallia
- Department of Movement, Human and Health Sciences, University of Rome, Foro Italico, Rome, Italy
7
Carvalho M, Cera N, Silva S. The "Ifs" and "Hows" of the Role of Music on the Implementation of Emotional Regulation Strategies. Behav Sci (Basel) 2022; 12:199. PMID: 35735409; PMCID: PMC9219814; DOI: 10.3390/bs12060199.
Abstract
Music is believed to aid the implementation of emotion regulation strategies such as distraction or reappraisal, but empirical studies have shown null results. However, the moderating role of one's relation with music (musical sophistication) and/or executive functioning skills has not yet been considered. In addition, little is known about how music acts. In the present study, we induced anger in a group of participants assessed for musical sophistication and executive functioning. We asked them to regulate their emotional state and measured regulation efficacy. Participants were split into four groups, defined by regulation strategy (distraction vs. reappraisal) and music (with vs. without). Results indicated music effects in higher, but not lower, musical sophistication participants: in the former, music benefited reappraisal but impaired distraction. Two different executive functions, working memory and affective flexibility, had opposite effects: higher, but not lower, working memory participants benefited from music, whereas lower, but not higher, affective flexibility participants took advantage of music. Reports of subjective experience suggested that music favors more empathic reappraisals, and that these may be longer-lasting. Our findings support the idea that music effects depend on listeners' characteristics, and they raise new hypotheses concerning the specificity of emotional regulation aided by music.
Affiliation(s)
- Mariana Carvalho
- Center for Psychology at University of Porto (CPUP), Faculty of Psychology and Education Sciences, University of Porto, 4200-135 Porto, Portugal
- Nicoletta Cera
- Center for Psychology at University of Porto (CPUP), Faculty of Psychology and Education Sciences, University of Porto, 4200-135 Porto, Portugal
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), 3000-548 Coimbra, Portugal
- Susana Silva
- Center for Psychology at University of Porto (CPUP), Faculty of Psychology and Education Sciences, University of Porto, 4200-135 Porto, Portugal
8
Behnke M, Kreibig SD, Kaczmarek LD, Assink M, Gross JJ. Autonomic Nervous System Activity During Positive Emotions: A Meta-Analytic Review. Emotion Review 2022. DOI: 10.1177/17540739211073084.
Abstract
Autonomic nervous system (ANS) activity is a fundamental component of emotional responding. It is not clear, however, whether positive emotional states are associated with differential ANS reactivity. To address this issue, we conducted a meta-analytic review of 120 articles (686 effect sizes, total N = 6,546), measuring ANS activity during 11 elicited positive emotions, namely amusement, attachment love, awe, contentment, craving, excitement, gratitude, joy, nurturant love, pride, and sexual desire. We identified a widely dispersed collection of studies. Univariate results indicated that positive emotions produce no or weak and highly variable increases in ANS reactivity. However, the limitations of work to date – which we discuss – mean that our conclusions should be treated as empirically grounded hypotheses that future research should validate.
Affiliation(s)
- Maciej Behnke
- Faculty of Psychology and Cognitive Science, Adam Mickiewicz University
- Mark Assink
- Research Institute of Child Development and Education, University of Amsterdam
9
Sharing Happy Stories Increases Interpersonal Closeness: Interpersonal Brain Synchronization as a Neural Indicator. eNeuro 2021; 8:ENEURO.0245-21.2021. PMID: 34750155; PMCID: PMC8607910; DOI: 10.1523/eneuro.0245-21.2021.
Abstract
Our lives revolve around sharing emotional stories (i.e., happy and sad stories) with other people. Such emotional communication enhances the similarity of story comprehension and neural activity across speaker-listener pairs. The Emotions as Social Information (EASI) model suggests that such emotional communication may influence interpersonal closeness. However, few studies have examined speaker-listener interpersonal brain synchronization (IBS) during emotional communication and whether it is associated with meaningful aspects of the speaker-listener interpersonal relationship. Here, one speaker watched emotional videos and communicated the content of the videos to 32 listeners (happy/sad/neutral groups). The neural activity of both the speaker and the listeners was recorded using EEG. After listening, we assessed the interpersonal closeness between the speaker and listeners. Compared with the sad group, sharing happy stories produced better recall quality and a higher rating of interpersonal closeness. The happy group showed higher IBS in the frontal cortex and left temporoparietal cortex than the sad group. The relationship between frontal IBS and interpersonal closeness was moderated by sharing happy/sad stories. Exploratory analysis using support vector regression (SVR) showed that IBS could also predict the ratings of interpersonal closeness. These results suggest that frontal IBS could serve as an indicator of whether sharing emotional stories facilitates interpersonal closeness. These findings improve our understanding of the emotional communication that guides behaviors during interpersonal interactions.
10
Music and Time Perception in Audiovisuals: Arousing Soundtracks Lead to Time Overestimation No Matter Their Emotional Valence. Multimodal Technologies and Interaction 2021. DOI: 10.3390/mti5110068.
Abstract
One of the most tangible effects of music is its ability to alter our perception of time. Research on waiting times and on the time estimation of musical excerpts has attested to these effects. Nevertheless, contrasting results exist regarding the influence of several musical features on time perception. When considering emotional valence and arousal, there is some evidence that positive affect music fosters time underestimation, whereas negative affect music leads to overestimation. Results concerning arousal, by contrast, remain contradictory. Furthermore, to the best of our knowledge, a systematic investigation has not yet been conducted within the audiovisual domain, wherein music might improve the interaction between the user and the audiovisual media by shaping the recipients' time perception. Through the current between-subjects online experiment (n = 565), we sought to analyze the influence that four soundtracks (happy, relaxing, sad, scary), differing in valence and arousal, exerted on the time estimation of a short movie, as compared to a no-music condition. The results reveal that (1) the mere presence of music led to time overestimation as opposed to the absence of music, and (2) the soundtracks that were perceived as more arousing (i.e., happy and scary) led to time overestimation. The findings are discussed in terms of psychological and phenomenological models of time perception.
11
De Filippi E, Wolter M, Melo BRP, Tierra-Criollo CJ, Bortolini T, Deco G, Moll J. Classification of Complex Emotions Using EEG and Virtual Environment: Proof of Concept and Therapeutic Implication. Front Hum Neurosci 2021; 15:711279. PMID: 34512297; PMCID: PMC8427812; DOI: 10.3389/fnhum.2021.711279.
Abstract
During the last decades, neurofeedback training for emotional self-regulation has received significant attention from scientific and clinical communities. Most studies have investigated emotions using functional magnetic resonance imaging (fMRI), including the real-time application in neurofeedback training. However, the electroencephalogram (EEG) is a more suitable tool for therapeutic application. Our study aims at establishing a method to classify discrete complex emotions (e.g., tenderness and anguish) elicited through a near-immersive scenario that can be later used for EEG-neurofeedback. EEG-based affective computing studies have mainly focused on emotion classification based on dimensions, commonly using passive elicitation through single-modality stimuli. Here, we integrated both passive and active elicitation methods. We recorded electrophysiological data during emotion-evoking trials, combining emotional self-induction with a multimodal virtual environment. We extracted correlational and time-frequency features, including frontal-alpha asymmetry (FAA), using Complex Morlet Wavelet convolution. Thinking about future real-time applications, we performed within-subject classification using 1-s windows as samples and we applied trial-specific cross-validation. We opted for a traditional machine-learning classifier with low computational complexity and sufficient validation in online settings, the Support Vector Machine. Results of individual-based cross-validation using the whole feature sets showed considerable between-subject variability. The individual accuracies ranged from 59.2 to 92.9% using time-frequency/FAA and 62.4 to 92.4% using correlational features. We found that features of the temporal, occipital, and left-frontal channels were the most discriminative between the two emotions. Our results show that the suggested pipeline is suitable for individual-based classification of discrete emotions, paving the way for future personalized EEG-neurofeedback training.
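The classification setup this abstract describes, 1-s windows as samples with trial-specific cross-validation and an SVM, amounts to grouped cross-validation: all windows from a given trial stay in the same fold, so the classifier is never tested on windows from a trial it saw during training. A minimal sketch with synthetic data, assuming scikit-learn's GroupKFold as the grouping mechanism (the feature counts and class structure below are illustrative, not the study's):

```python
# Trial-grouped cross-validation of an SVM on synthetic EEG-like features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n_trials, windows_per_trial, n_features = 20, 30, 8
X = rng.normal(size=(n_trials * windows_per_trial, n_features))
y_trial = np.tile([0, 1], n_trials // 2)          # one emotion label per trial
y = np.repeat(y_trial, windows_per_trial)          # label copied to each 1-s window
groups = np.repeat(np.arange(n_trials), windows_per_trial)  # trial id per window

# Shift one feature's mean for class 1 so the classes are separable.
X[y == 1, 0] += 1.0

# GroupKFold keeps all windows of a trial in the same fold.
scores = cross_val_score(SVC(kernel="rbf"), X, y, groups=groups,
                         cv=GroupKFold(n_splits=5))
print(scores.mean())
```

Without the grouping, windows from the same trial would leak between train and test folds and inflate the accuracy estimate, which is presumably why the authors validate trial-wise.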
Affiliation(s)
- Eleonora De Filippi
- Computational Neuroscience Group, Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Mara Wolter
- Cognitive Neuroscience and Neuroinformatics Unit, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Bruno R. P. Melo
- Cognitive Neuroscience and Neuroinformatics Unit, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Biomedical Engineering Program, Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Carlos J. Tierra-Criollo
- Biomedical Engineering Program, Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Tiago Bortolini
- Cognitive Neuroscience and Neuroinformatics Unit, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Gustavo Deco
- Computational Neuroscience Group, Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats, Barcelona, Spain
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
- Jorge Moll
- Cognitive Neuroscience and Neuroinformatics Unit, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Scients Institute, Palo Alto, CA, United States
12
Fuentes-Sánchez N, Pastor R, Escrig MA, Elipe-Miravet M, Pastor MC. Emotion elicitation during music listening: Subjective self-reports, facial expression, and autonomic reactivity. Psychophysiology 2021; 58:e13884. PMID: 34145586; DOI: 10.1111/psyp.13884.
Abstract
The use of music as emotional stimuli in experimental studies has grown in recent years. However, prior studies have mainly focused on self-reports and central measures, with only a few works exploring the time course of psychophysiological correlates. Moreover, most previous research has been carried out either from the dimensional or the categorical model, but not combining both approaches to emotion. This study aimed to investigate subjective and physiological correlates of emotion elicitation through music, following the three-dimensional and the discrete emotion models. A sample of 50 healthy volunteers (25 women) took part in this experiment by listening to 42 film music excerpts (14 pleasant, 14 unpleasant, 14 neutral), each presented for 8 s, while peripheral measures were continuously recorded. After music offset, affective dimensions (valence, energy arousal, and tension arousal) as well as discrete emotions (happiness, sadness, tenderness, fear, and anger) were collected using a 9-point scale. Results showed an effect of music category on subjective and psychophysiological measures. In peripheral physiology, greater electrodermal activity, heart rate acceleration, and zygomatic responses, together with lower corrugator amplitude, were observed for pleasant excerpts in comparison to neutral and unpleasant music, from 2 s after stimulus onset until the end of its duration. Overall, our results add evidence for the efficacy of standardized film music excerpts in evoking powerful emotions in laboratory settings, thus opening a path to explore music-based interventions in pathologies with underlying emotion dysregulation processes.
Affiliation(s)
- Nieves Fuentes-Sánchez
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- Raúl Pastor
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- Miguel A Escrig
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- Marcel Elipe-Miravet
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- M Carmen Pastor
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
13
Kasos K, Kekecs Z, Csirmaz L, Zimonyi S, Vikor F, Kasos E, Veres A, Kotyuk E, Szekely A. Bilateral comparison of traditional and alternate electrodermal measurement sites. Psychophysiology 2020; 57:e13645. PMID: 32931044; DOI: 10.1111/psyp.13645.
Abstract
Advances in mobile and wireless technology have expanded the scope of electrodermal research. Since traditional electrodermal measurement sites are not always suitable for laboratory research and are rarely appropriate for ambulatory measurements, there is a need to explore and contrast alternate measurement locations. We evaluated bilateral electrodermal activity (EDA) from five measurement sites (fingers, feet, wrists, shoulders, and calves). In a counterbalanced, randomized, within-subjects design study, participants (N = 115) engaged in a 4-min-long breathing exercise and were exposed to emotionally laden and neutral stimuli. High within-subject correlations were found between the EDA measured bilaterally from the fingers (r = .89) and between the left fingers and both feet (r = .72). Moderate correlations were found between EDA measured from the left fingers and the wrists (r = .30 and r = .33), and low correlations between the left fingers and the shoulders (r = -.03 and r = -.06) or calves (r = .05 and r = .14). Response latency was shortest on the fingers and longest on the lower body. Short response windows would miss some of the responses from the palmar surfaces and a substantial number from the other evaluated locations. The fingers and the feet are the most reliable locations to measure from, followed by the wrists. We suggest setting site-specific response windows for different measurement locations. An investigation of repeatability showed that within-subject correlations, response frequencies, and response amplitudes follow a similar pattern from the first measurement time to a later one.
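The within-subject correlation logic summarized above, correlating two simultaneously recorded EDA channels for each participant and then summarizing across participants, can be illustrated with simulated signals. The drifting random-walk "EDA" below is a stand-in under stated assumptions, not the study's data:

```python
# Per-participant correlation of two simulated EDA channels, then the
# group-level summary (illustrative only; signals are synthetic).
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_samples = 10, 500
within_r = []
for _ in range(n_subjects):
    finger = rng.normal(size=n_samples).cumsum()           # slow, drifting tonic signal
    foot = finger + rng.normal(scale=2.0, size=n_samples)  # related channel plus noise
    r = np.corrcoef(finger, foot)[0, 1]                    # within-subject Pearson r
    within_r.append(r)
print(np.mean(within_r))                                   # group-level summary
```

Computing r within each subject before averaging keeps between-subject level differences from inflating the correlation, which is presumably why the abstract reports within-subject values.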
Affiliation(s)
- Krisztian Kasos
- Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- MTA-ELTE Lendület Adaptation Research Group, Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Zoltan Kekecs
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Luca Csirmaz
- MTA-ELTE Lendület Adaptation Research Group, Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Szabolcs Zimonyi
- MTA-ELTE Lendület Adaptation Research Group, Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Fanni Vikor
- MTA-ELTE Lendület Adaptation Research Group, Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Eniko Kasos
- Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- MTA-ELTE Lendület Adaptation Research Group, Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Eszter Kotyuk
- MTA-ELTE Lendület Adaptation Research Group, Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Anna Szekely
- MTA-ELTE Lendület Adaptation Research Group, Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
14
Abstract
To effectively communicate with people, social robots must be capable of detecting, interpreting, and responding to human affect during human–robot interactions (HRIs). In order to accurately detect user affect during HRIs, affect elicitation techniques need to be developed to create and train appropriate affect detection models. In this paper, we present such a novel affect elicitation and detection method for social robots in HRIs. Non-verbal emotional behaviors of the social robot were designed to elicit user affect, which was directly measured through electroencephalography (EEG) signals. HRI experiments with both younger and older adults were conducted to evaluate our affect elicitation technique and to compare the two types of affect detection models we developed and trained utilizing multilayer perceptron neural networks (NNs) and support vector machines (SVMs). The results showed that, on average, the self-reported valence and arousal were consistent with the intended elicited affect. Furthermore, the EEG data obtained could be used to train affect detection models, with the NN models achieving higher classification rates.