1. Heffer N, Dennie E, Ashwin C, Petrini K, Karl A. Multisensory processing of emotional cues predicts intrusive memories after virtual reality trauma. Virtual Reality 2023;27:2043-2057. [PMID: 37614716; PMCID: PMC10442266; DOI: 10.1007/s10055-023-00784-1]
Abstract
Research has shown that high trait anxiety can alter multisensory processing of threat cues (by amplifying integration of angry faces and voices); however, it remains unknown whether differences in multisensory processing play a role in the psychological response to trauma. This study examined the relationship between multisensory emotion processing and intrusive memories over seven days following exposure to an analogue trauma in a sample of 55 healthy young adults. We used an adapted version of the trauma film paradigm, where scenes showing a car accident trauma were presented using virtual reality, rather than a conventional 2D film. Multisensory processing was assessed prior to the trauma simulation using a forced choice emotion recognition paradigm with happy, sad and angry voice-only, face-only, audiovisual congruent (face and voice expressed matching emotions) and audiovisual incongruent expressions (face and voice expressed different emotions). We found that increased accuracy in recognising anger (but not happiness and sadness) in the audiovisual condition relative to the voice- and face-only conditions was associated with more intrusions following VR trauma. Despite previous results linking trait anxiety and intrusion development, no significant influence of trait anxiety on intrusion frequency was observed. Enhanced integration of threat-related information (i.e. angry faces and voices) could lead to overly threatening appraisals of stressful life events and result in greater intrusion development after trauma. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00784-1.
Affiliation(s)
- Naomi Heffer
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- School of Sciences, Bath Spa University, Bath, UK
- Emma Dennie
- Mood Disorders Centre, University of Exeter, Exeter, UK
- Chris Ashwin
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- Centre for Applied Autism Research (CAAR), Bath, UK
- Karin Petrini
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, UK
- Anke Karl
- Mood Disorders Centre, University of Exeter, Exeter, UK
2. Kadiri SR, Alku P. Subjective Evaluation of Basic Emotions from Audio-Visual Data. Sensors (Basel) 2022;22:4931. [PMID: 35808423; PMCID: PMC9269694; DOI: 10.3390/s22134931]
Abstract
Understanding how humans perceive emotions or affective states is important for developing emotion-aware systems that work in realistic scenarios. In this paper, the perception of emotions in naturalistic human interaction (audio-visual data) is studied using perceptual evaluation. For this purpose, a naturalistic audio-visual emotion database collected from TV broadcasts such as soap operas and movies, called the IIIT-H Audio-Visual Emotion (IIIT-H AVE) database, is used. The database consists of audio-alone, video-alone, and audio-visual data in English. Using data from all three modes, perceptual tests are conducted for four basic emotions (angry, happy, neutral, and sad) based on category labeling, and for two dimensions, arousal (active or passive) and valence (positive or negative), based on dimensional labeling. The results indicated that participants' perception of emotions differed markedly between the audio-alone, video-alone, and audio-visual data. This finding emphasizes the importance of emotion-specific features over commonly used features in the development of emotion-aware systems.
3. Heffer N, Karl A, Jicol C, Ashwin C, Petrini K. Anxiety biases audiovisual processing of social signals. Behav Brain Res 2021;410:113346. [PMID: 33964354; DOI: 10.1016/j.bbr.2021.113346]
Abstract
In everyday life, information from multiple senses is integrated for a holistic understanding of emotion. Despite evidence of atypical multisensory perception in populations with socio-emotional difficulties (e.g., autistic individuals), little research to date has examined how anxiety impacts on multisensory emotion perception. Here we examined whether the level of trait anxiety in a sample of 56 healthy adults affected audiovisual processing of emotion for three types of stimuli: dynamic faces and voices, body motion and dialogues of two interacting agents, and circles and tones. Participants judged emotion from four types of displays - audio-only, visual-only, audiovisual congruent (e.g., angry face and angry voice) and audiovisual incongruent (e.g., angry face and happy voice) - as happy or angry, as quickly as possible. In one task, participants based their emotional judgements on information in one modality while ignoring information in the other, and in a second task they based their judgements on their overall impressions of the stimuli. The results showed that the higher trait anxiety group prioritized the processing of angry cues when combining faces and voices that portrayed conflicting emotions. Individuals in this group were also more likely to benefit from combining congruent face and voice cues when recognizing anger. The multisensory effects of anxiety were found to be independent of the effects of autistic traits. The observed effects of trait anxiety on multisensory processing of emotion may serve to maintain anxiety by increasing sensitivity to social-threat and thus contributing to interpersonal difficulties.
Affiliation(s)
- Naomi Heffer
- University of Bath, Department of Psychology, United Kingdom
- Anke Karl
- University of Exeter, Mood Disorders Centre, United Kingdom
- Crescent Jicol
- University of Bath, Department of Psychology, United Kingdom
- Chris Ashwin
- University of Bath, Department of Psychology, United Kingdom
- Karin Petrini
- University of Bath, Department of Psychology, United Kingdom
4. Okruszek Ł, Chrustowicz M. Social Perception and Interaction Database-A Novel Tool to Study Social Cognitive Processes With Point-Light Displays. Front Psychiatry 2020;11:123. [PMID: 32218745; PMCID: PMC7078367; DOI: 10.3389/fpsyt.2020.00123]
Abstract
Introduction: The ability to detect and interpret social interactions (SI) is one of the crucial skills enabling people to operate in the social world. Multiple lines of evidence converge to indicate the preferential processing of SI when compared to the individual actions of multiple agents, even if the actions were visually degraded to minimalistic point-light displays (PLDs). Here, we present a novel PLD dataset (Social Perception and Interaction Database; SoPID) that may be used for studying multiple levels of social information processing. Methods: During a motion-capture session, two pairs of actors were asked to perform a wide range of 3-second actions, including: (1) neutral, gesture-based communicative interactions (COM); (2) emotional exchanges (Happy/Angry); (3) synchronous interactive physical activity of actors (SYNC); and (4) independent actions of agents, either object-related (ORA) or non-object-related (NORA). An interface that allows single/dyadic PLD stimuli to be presented from either a second-person (action aimed toward the viewer) or third-person (observation of actions directed toward other agents) perspective was implemented on the basis of the recorded actions. Two validation studies (each with 20 healthy individuals) were then performed to establish the recognizability of the SoPID vignettes. Results: The first study showed ceiling-level accuracy for discriminating communicative vs. individual actions (93% ± 5%) and high accuracy for interpreting specific types of actions (85% ± 4%) from the SoPID. In the second study, a robust effect of scrambling on the recognizability of SoPID stimuli was observed in an independent sample of healthy individuals. Discussion: These results suggest that the SoPID may be effectively used to examine processes associated with communicative interactions and intention processing. The database can be accessed via the Open Science Framework (https://osf.io/dcht8/).
Affiliation(s)
- Łukasz Okruszek
- Social Neuroscience Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
5. Emotional prosody Stroop effect in Hindi: An event related potential study. Prog Brain Res 2019. [PMID: 31196434; DOI: 10.1016/bs.pbr.2019.04.003]
Abstract
Prosody processing is an important aspect of language comprehension. Previous research on emotional word-prosody conflict has shown that participants perform worse when emotional prosody and word meaning are incongruent. Studies with event-related potentials have shown a congruency effect in the N400 component. There has been no study of emotional processing in Hindi in the context of conflict between emotional word meaning and prosody. We used happy and angry words spoken with happy and angry prosody. Participants had to identify whether the word had a happy or angry meaning. The results showed a congruency effect, with worse performance on incongruent trials, indicating an emotional Stroop effect in Hindi. The ERP results showed that prosody information is detected very early, which can be seen in the N1 component. In addition, there was a congruency effect in the N400. The results show that prosody is processed very early and that an emotional meaning-prosody congruency effect is obtained in Hindi. Further studies are needed to investigate similarities and differences in the cognitive control associated with language processing.
6. Di Mauro M, Toffalini E, Grassi M, Petrini K. Effect of Long-Term Music Training on Emotion Perception From Drumming Improvisation. Front Psychol 2018;9:2168. [PMID: 30473677; PMCID: PMC6237981; DOI: 10.3389/fpsyg.2018.02168]
Abstract
Long-term music training has been shown to affect different cognitive and perceptual abilities. However, it is less well known whether it can also affect the perception of emotion from music, especially purely rhythmic music. Hence, we asked a group of 16 non-musicians, 16 musicians with no drumming experience, and 16 drummers to judge the level of expressiveness, the valence (positive and negative), and the category of emotion perceived from 96 drumming improvisation clips (audio-only, video-only, and audio-video) that varied in several music features (e.g., musical genre, tempo, complexity, drummer’s expressiveness, and drummer’s style). Our results show that the level and type of music training influence the perceived expressiveness, valence, and emotion from solo drumming improvisation. Overall, non-musicians, non-drummer musicians, and drummers were affected differently by changes in some characteristics of the music performance; for example, musicians (with and without drumming experience) gave greater weight to the visual performance than non-musicians when making their emotional judgments. These findings suggest that besides influencing several cognitive and perceptual abilities, music training also affects how we perceive emotion from music.
Affiliation(s)
- Martina Di Mauro
- Department of General Psychology, University of Padua, Padua, Italy
- Enrico Toffalini
- Department of General Psychology, University of Padua, Padua, Italy
- Massimo Grassi
- Department of General Psychology, University of Padua, Padua, Italy
- Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom
7. Courbalay A, Deroche T, Pradon D, Oliveira AM, Amorim MA. Clinical experience changes the combination and the weighting of audio-visual sources of information. Acta Psychol (Amst) 2018;191:219-227. [PMID: 30336350; DOI: 10.1016/j.actpsy.2018.09.013]
Abstract
Objective: Although audio and visual information constitute relevant channels to communicate pain, it remains unclear to what extent observers combine and weight these sources of information when estimating others' pain. The present study aimed to examine this issue through the theoretical framework of the Information Integration Theory. The combination and weighting processes were addressed in view of familiarity with others' pain. Method: Twenty-six participants familiar with pain (novice podiatry clinicians) and thirty non-specialists were asked to estimate the level of pain associated with different displayed locomotor behaviors. Audio and visual information (i.e., sound and gait kinematics) were combined across different intensities and implemented in animated human stick figures performing a walking task (from normal to pathological gaits). Results: The novice clinicians and non-specialists relied significantly on gaits and sounds to estimate others' pain intensity. The combination of the two types of information obeyed an averaging rule for the majority of the novice clinicians and an additive rule for the non-specialists. The novice clinicians leaned more on gaits in the absence of limping, whereas they depended more on sounds in the presence of limping. The non-specialists relied more on gaits than on sounds. Overall, the novice clinicians attributed greater pain levels than the non-specialists did. Conclusion: Depending on a person's clinical experience, the combination of audio and visual pain-related behavior can qualitatively change the processes related to the assessment of others' pain. Non-verbal pain-related behaviors as well as the clinical implications are discussed in view of the assessment of others' pain.
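For readers unfamiliar with Information Integration Theory, the additive and averaging rules referred to above can be sketched as follows; this is a generic textbook formulation with illustrative symbols, not an equation taken from the cited paper:

\[ R_{\mathrm{additive}} = w_{a}\,s_{a} + w_{v}\,s_{v}, \qquad R_{\mathrm{averaging}} = \frac{w_{a}\,s_{a} + w_{v}\,s_{v}}{w_{a} + w_{v}} \]

where \(s_{a}\) and \(s_{v}\) denote the subjective scale values of the audio and visual cues and \(w_{a}\), \(w_{v}\) their weights. A practical consequence is that under an averaging rule a weak cue can pull the overall pain estimate down, whereas under an additive rule an extra cue can only raise it, which is one way the two integration strategies can be distinguished empirically.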
Affiliation(s)
- Anne Courbalay
- CIAMS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France; APCoSS - Institute of Physical Education and Sports Sciences (IFEPSA), UCO, Angers, France.
- Thomas Deroche
- CIAMS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France.
- Didier Pradon
- UMR 1179 END-ICAP (INSERM-UVSQ), Hôpital Universitaire Raymond Poincaré, APHP, Garches, France.
- Armando M Oliveira
- Institute of Cognitive Psychology, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
- Michel-Ange Amorim
- CIAMS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France.
8. Okruszek Ł. It Is Not Just in Faces! Processing of Emotion and Intention from Biological Motion in Psychiatric Disorders. Front Hum Neurosci 2018;12:48. [PMID: 29472852; PMCID: PMC5809469; DOI: 10.3389/fnhum.2018.00048]
Abstract
Social neuroscience offers a wide range of techniques that may be applied to study the social cognitive deficits that may underlie reduced social functioning—a common feature across many psychiatric disorders. At the same time, a significant proportion of research in this area has been conducted using paradigms that utilize static displays of faces or eyes. The use of point-light displays (PLDs) offers a viable alternative for studying recognition of emotion or intention inference while minimizing the amount of information presented to participants. This mini-review aims to summarize studies that have used PLD to study emotion and intention processing in schizophrenia (SCZ), affective disorders, anxiety and personality disorders, eating disorders and neurodegenerative disorders. Two main conclusions can be drawn from the reviewed studies: first, the social cognitive problems found in most of the psychiatric samples using PLD were of smaller magnitude than those found in studies presenting social information using faces or voices. Second, even though the information presented in PLDs is extremely limited, presentation of these types of stimuli is sufficient to elicit the disorder-specific, social cognitive biases (e.g., mood-congruent bias in depression, increased threat perception in anxious individuals, aberrant body size perception in eating disorders) documented using other methodologies. Taken together, these findings suggest that point-light stimuli may be a useful method of studying social information processing in psychiatry. At the same time, some limitations of using this methodology are also outlined.
Affiliation(s)
- Łukasz Okruszek
- Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
9. Thye MD, Bednarz HM, Herringshaw AJ, Sartin EB, Kana RK. The impact of atypical sensory processing on social impairments in autism spectrum disorder. Dev Cogn Neurosci 2018;29:151-167. [PMID: 28545994; PMCID: PMC6987885; DOI: 10.1016/j.dcn.2017.04.010]
Abstract
Altered sensory processing has been an important feature of the clinical descriptions of autism spectrum disorder (ASD). There is evidence that sensory dysregulation arises early in the progression of ASD and impacts social functioning. This paper reviews behavioral and neurobiological evidence that describes how sensory deficits across multiple modalities (vision, hearing, touch, olfaction, gustation, and multisensory integration) could impact social functions in ASD. Theoretical models of ASD and their implications for the relationship between sensory and social functioning are discussed. Furthermore, neural differences in anatomy, function, and connectivity of different regions underlying sensory and social processing are also discussed. We conclude that there are multiple mechanisms through which early sensory dysregulation in ASD could cascade into social deficits across development. Future research is needed to clarify these mechanisms, and specific focus should be given to distinguish between deficits in primary sensory processing and altered top-down attentional and cognitive processes.
Affiliation(s)
- Melissa D Thye
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL 35233, United States
- Haley M Bednarz
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL 35233, United States
- Abbey J Herringshaw
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL 35233, United States
- Emma B Sartin
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL 35233, United States
- Rajesh K Kana
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL 35233, United States
10. Piwek L, Petrini K, Pollick F. A dyadic stimulus set of audiovisual affective displays for the study of multisensory, emotional, social interactions. Behav Res Methods 2016;48:1285-1295. [PMID: 26542970; PMCID: PMC5101291; DOI: 10.3758/s13428-015-0654-4]
Abstract
We describe the creation of the first multisensory stimulus set that consists of dyadic, emotional, point-light interactions combined with voice dialogues. Our set includes 238 unique clips, which present happy, angry and neutral emotional interactions at low, medium and high levels of emotional intensity between nine different actor dyads. The set was evaluated in a between-design experiment, and was found to be suitable for a broad potential application in the cognitive and neuroscientific study of biological motion and voice, perception of social interactions and multisensory integration. We also detail in this paper a number of supplementary materials, comprising AVI movie files for each interaction, along with text files specifying the three dimensional coordinates of each point-light in each frame of the movie, as well as unprocessed AIFF audio files for each dialogue captured. The full set of stimuli is available to download from: http://motioninsocial.com/stimuli_set/ .
Affiliation(s)
- Lukasz Piwek
- Centre for the Study of Behaviour Change and Influence, University of the West of England, 4D17, Coldharbour Lane, BS16 1QY Bristol, UK
- Karin Petrini
- Department of Psychology, University of Bath, Claverton Down, BA2 7AY Bath, UK
- Frank Pollick
- School of Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB Glasgow, UK
11.
Abstract
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was conveyed best by nonsense sentences, better than by prolonged vowels or by a shared native language between the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. Both genders recognized the emotional stimuli better from visual stimuli than from auditory stimuli. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.
Affiliation(s)
- Teija Waaramaa
- Tampere Research Centre for Journalism, Media and Communication (COMET), School of Communication, Media and Theatre (CMT), University of Tampere, Tampere, Finland