1
Gallese V. Digital visions: the experience of self and others in the age of the digital revolution. Int Rev Psychiatry 2024; 36:656-666. PMID: 39555840. DOI: 10.1080/09540261.2024.2355281.
Abstract
The digital technological revolution has shifted the balance of our perceptual experience of the world, increasing exposure to digital content and introducing a new quality to perceptual experience. Embodied cognition offers an ideal vantage point from which to study how digital technologies affect selves and their social relations, for at least two reasons: first, because of the bodily, performative character of the relations and interactions these new media evoke; second, because similar brain-body mechanisms ground our relations with both the physical world and its digital mediations. A closer look is taken at the possible effects of digitization on social communication, on politics, and on the constitution of the self and its relations to the world, especially in the context of the ever-increasing amount of time spent online, with a focus on digital natives. As we explore the complexities of the digital age, it is imperative to critically examine the role of digital technologies in shaping social life and political discourse. By understanding the interplay between content, emotional context, delivery methods, and shareability within digital media landscapes, we can develop strategies to mitigate the negative effects of misinformation and promote informed decision-making in our increasingly digital world.
Affiliation(s)
- Vittorio Gallese
- Department of Medicine & Surgery - Neuroscience Unit, University of Parma, Parma, Italy
- Italian Academy for Advanced Studies in America, Columbia University, New York, NY, USA
2
Hsu CT, Sato W, Yoshikawa S. An investigation of the modulatory effects of empathic and autistic traits on emotional and facial motor responses during live social interactions. PLoS One 2024; 19:e0290765. PMID: 38194416. PMCID: PMC10775989. DOI: 10.1371/journal.pone.0290765.
Abstract
A close relationship between emotional contagion and spontaneous facial mimicry has been proposed theoretically and is supported by empirical data. Facial expressions are essential to both emotional and motor synchrony. Previous studies have demonstrated that trait emotional empathy enhances spontaneous facial mimicry, but the relationship between autistic traits and spontaneous mimicry remains controversial. Moreover, previous studies presented static or videotaped faces, which may lack the "liveliness" of real-life social interactions. We addressed this limitation by using an image relay system to present live performances and pre-recorded videos of smiling or frowning dynamic facial expressions to 94 healthy female participants. We assessed their subjective experiential valence and arousal ratings to infer the amplitude of emotional contagion, and we measured the electromyographic activities of the zygomaticus major and corrugator supercilii muscles to estimate spontaneous facial mimicry. Individual difference measures included trait emotional empathy (empathic concern) and the autism-spectrum quotient. We did not find that live performances enhanced the modulatory effect of trait differences on emotional contagion or spontaneous facial mimicry. However, high trait empathic concern was associated with stronger emotional contagion and corrugator mimicry. We found no two-way interaction between the autism-spectrum quotient and emotional condition, suggesting that autistic traits did not modulate emotional contagion or spontaneous facial mimicry. Our findings imply that previous findings on the relationship between emotional empathy and emotional contagion or spontaneous facial mimicry obtained with videos and photos can be generalized to real-life interactions.
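As context for this kind of measurement, the sketch below shows one common way to quantify facial EMG responses (full-wave rectification, smoothing, and baseline correction); the function, sampling rate, and window lengths are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

def emg_response(raw_emg, fs, stim_onset_s, baseline_s=0.5, window_s=1.0):
    """Baseline-corrected mean EMG amplitude after stimulus onset.

    A generic way to quantify spontaneous facial mimicry: remove the DC offset,
    full-wave rectify, smooth into an envelope, and express post-stimulus
    activity relative to a pre-stimulus baseline.
    """
    rectified = np.abs(raw_emg - raw_emg.mean())
    win = int(0.05 * fs)                                   # 50 ms moving-average window
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    onset = int(stim_onset_s * fs)
    baseline = envelope[onset - int(baseline_s * fs):onset].mean()
    response = envelope[onset:onset + int(window_s * fs)].mean()
    return response - baseline  # > 0 suggests activation (e.g., zygomaticus major to smiles)

# Toy usage with noise-like data; a real recording would show a clear post-onset rise.
fs = 1000
emg = np.random.randn(5 * fs) * 5e-6
print(emg_response(emg, fs, stim_onset_s=2.0))
```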
Affiliation(s)
- Chun-Ting Hsu
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto, Japan
- Sakiko Yoshikawa
- Institute of Philosophy and Human Values, Kyoto University of the Arts, Kyoto, Kyoto, Japan
3
Hsu CT, Sato W. Electromyographic Validation of Spontaneous Facial Mimicry Detection Using Automated Facial Action Coding. Sensors (Basel) 2023; 23:9076. PMID: 38005462. PMCID: PMC10675524. DOI: 10.3390/s23229076.
Abstract
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite a lack of evidence for its validity. Using facial EMG of the zygomaticus major (ZM) as the standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via automated FACS. Participants were alternately presented with real-time model performances and prerecorded videos of dynamic facial expressions while the ZM signal and frontal facial videos were acquired simultaneously. AU12 was estimated from the facial videos using FaceReader, Py-Feat, and OpenFace. Automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry during live performances. The AU12 time series showed a latency of roughly 100 to 300 ms relative to the ZM signal. Our results suggest that while automated FACS cannot replace facial EMG for mimicry detection, it may be adequate when large effects are expected. Researchers should be cautious with automated FACS outputs, especially when studying clinical populations, and developers should consider EMG validation of AU estimation as a benchmark.
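As an illustration of how such a latency can be quantified, the sketch below estimates the lag between an AU12 intensity trace and a ZM EMG envelope from the peak of their cross-correlation; it assumes both signals have already been resampled to a common rate, and the synthetic data and variable names are purely illustrative.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_latency_ms(au12, zm_emg, fs):
    """Lag (ms) of the AU12 trace relative to the ZM EMG envelope at sampling rate fs.
    Positive values mean AU12 lags behind the EMG."""
    a = (au12 - au12.mean()) / au12.std()
    z = (zm_emg - zm_emg.mean()) / zm_emg.std()
    xcorr = correlate(a, z, mode="full")
    lags = correlation_lags(len(a), len(z), mode="full")
    return 1000.0 * lags[np.argmax(xcorr)] / fs

# Synthetic check: AU12 is a noisy copy of the EMG envelope delayed by 200 ms.
fs = 100
t = np.arange(0, 10, 1 / fs)
zm = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)
au12 = np.roll(zm, int(0.2 * fs)) + 0.05 * np.random.randn(t.size)
print(f"estimated latency: {estimate_latency_ms(au12, zm, fs):.0f} ms")  # ~200 ms
```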
Affiliation(s)
- Chun-Ting Hsu
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
4
Guntinas-Lichius O, Trentzsch V, Mueller N, Heinrich M, Kuttenreich AM, Dobel C, Volk GF, Graßme R, Anders C. High-resolution surface electromyographic activities of facial muscles during the six basic emotional expressions in healthy adults: a prospective observational study. Sci Rep 2023; 13:19214. PMID: 37932337. PMCID: PMC10628297. DOI: 10.1038/s41598-023-45779-9.
Abstract
High-resolution facial surface electromyography (HR-sEMG) is suited to discriminating between different facial movements. Whether HR-sEMG also allows discrimination among the six basic emotional facial expressions is unclear. Thirty-six healthy participants (53% female, 18-67 years) were included for four sessions. Electromyograms were recorded from both sides of the face using a muscle-position-oriented electrode application (Fridlund scheme) and a landmark-oriented, muscle-unrelated symmetrical electrode arrangement (Kuramoto scheme) simultaneously. In each session, participants expressed the six basic emotions in response to standardized facial images expressing the corresponding emotions. This was repeated once on the same day, and both sessions were repeated two weeks later to assess repetition effects. HR-sEMG characteristics showed systematic regional distribution patterns of emotional muscle activation for both schemes with very low interindividual variability. Statistical discrimination between the different HR-sEMG patterns was good for both schemes for most but not all basic emotions (ranging from p > 0.05 to mostly p < 0.001) when HR-sEMG of the entire face was used. When only information from the lower face was used, the Kuramoto scheme allowed a more reliable discrimination of all six emotions (all p < 0.001). A landmark-oriented HR-sEMG recording therefore allows specific discrimination of facial muscle activity patterns during basic emotional expressions.
Affiliation(s)
- Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany.
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany.
- Center for Rare Diseases, Jena University Hospital, Jena, Germany.
- Vanessa Trentzsch
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Nadiya Mueller
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Martin Heinrich
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Anna-Maria Kuttenreich
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Christian Dobel
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Gerd Fabian Volk
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Roland Graßme
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Department of Prevention, Biomechanics, German Social Accident Insurance Institution for the Foodstuffs and Catering Industry, Erfurt, Germany
- Christoph Anders
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
5
Hsu CT, Sato W, Kochiyama T, Nakai R, Asano K, Abe N, Yoshikawa S. Enhanced Mirror Neuron Network Activity and Effective Connectivity during Live Interaction Among Female Subjects. Neuroimage 2022; 263:119655. PMID: 36182055. DOI: 10.1016/j.neuroimage.2022.119655.
Abstract
Facial expressions are indispensable in daily human communication. Previous neuroimaging studies of facial expression processing have presented pre-recorded stimuli and lacked live face-to-face interaction. Our paradigm alternated between presenting real-time model performances and pre-recorded videos of dynamic facial expressions to participants. Simultaneous functional magnetic resonance imaging (fMRI) and facial electromyography recordings, as well as post-scan valence and arousal ratings, were acquired from 44 female participants. Live facial expressions enhanced subjective valence and arousal ratings as well as facial muscular responses. Live performances showed greater engagement of the right posterior superior temporal sulcus (pSTS), right inferior frontal gyrus (IFG), right amygdala, and right fusiform gyrus, and modulated the effective connectivity within the right mirror neuron system (IFG, pSTS, and right inferior parietal lobule). A support vector machine algorithm could classify multivoxel activation patterns in brain regions involved in dynamic facial expression processing and in the mentalizing network (anterior and posterior cingulate cortex). These results indicate that live social interaction modulates the activity and connectivity of the right mirror neuron system and enhances spontaneous mimicry, further facilitating emotional contagion.
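For readers unfamiliar with this decoding approach, the sketch below shows a generic multivoxel pattern analysis: a linear support vector machine with cross-validation, where above-chance accuracy would indicate that a region's activation patterns carry condition information. The data, labels, and parameters are hypothetical placeholders, not the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical data: one multivoxel pattern (row) per trial from a region of interest,
# labelled 1 for live-performance trials and 0 for pre-recorded-video trials.
rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)

# Linear SVM with voxel-wise standardization, evaluated with stratified 5-fold CV.
# With random data like this, accuracy hovers around chance (0.5).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"mean decoding accuracy: {scores.mean():.2f}")
```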
Affiliation(s)
- Chun-Ting Hsu
- Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
- Takanori Kochiyama
- Brain Activity Imaging Center, ATR-Promotions, Inc., 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
- Ryusuke Nakai
- Institute for the Future of Human Society, Kyoto University, 46 Yoshidashimoadachi-cho, Sakyo-ku, Kyoto, 606-8501 Japan
- Kohei Asano
- Institute for the Future of Human Society, Kyoto University, 46 Yoshidashimoadachi-cho, Sakyo-ku, Kyoto, 606-8501 Japan; Department of Children Education, Osaka University of Comprehensive Children Education, 6-chome-4-26 Yuzato, Higashisumiyoshi Ward, Osaka, 546-0013, Japan
- Nobuhito Abe
- Institute for the Future of Human Society, Kyoto University, 46 Yoshidashimoadachi-cho, Sakyo-ku, Kyoto, 606-8501 Japan
- Sakiko Yoshikawa
- Institute of Philosophy and Human Values, Kyoto University of the Arts, 2-116 Uryuyama Kitashirakawa, Sakyo, Kyoto, Kyoto 606-8271, Japan
6
Gu Y, Zheng C, Todoh M, Zha F. American Sign Language Translation Using Wearable Inertial and Electromyography Sensors for Tracking Hand Movements and Facial Expressions. Front Neurosci 2022; 16:962141. PMID: 35937881. PMCID: PMC9345758. DOI: 10.3389/fnins.2022.962141.
Abstract
A sign language translation system can break the communication barrier between hearing-impaired people and others. In this paper, a novel American Sign Language (ASL) translation method based on wearable sensors is proposed. We leveraged inertial sensors to capture signs and surface electromyography (EMG) sensors to detect facial expressions. We applied a convolutional neural network (CNN) to extract features from the input signals, and then used long short-term memory (LSTM) and transformer models to achieve end-to-end translation from input signals to text sentences. We evaluated the two models on 40 ASL sentences that strictly follow grammatical rules, using word error rate (WER) and sentence error rate (SER) as evaluation metrics. The LSTM model translated sentences in the test set with a 7.74% WER and a 9.17% SER; the transformer model performed considerably better, achieving a 4.22% WER and a 4.72% SER. These encouraging results indicate that both models are suitable for sign language translation with high accuracy. With complete motion-capture sensors and facial expression recognition methods, the sign language translation system has the potential to recognize more sentences.
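For readers unfamiliar with these metrics, the sketch below computes WER as the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words, and SER as the fraction of sentences containing any error; the example sentences are made up for illustration.

```python
from typing import List

def word_error_rate(reference: List[str], hypothesis: List[str]) -> float:
    """WER = edit distance between word sequences / number of reference words."""
    n, m = len(reference), len(hypothesis)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[n][m] / n

def sentence_error_rate(refs: List[List[str]], hyps: List[List[str]]) -> float:
    """SER = fraction of sentences with at least one word error."""
    return sum(r != h for r, h in zip(refs, hyps)) / len(refs)

ref = "i want to drink water".split()
hyp = "i want drink water now".split()
print(f"WER = {word_error_rate(ref, hyp):.0%}")          # 2 edits / 5 words = 40%
print(f"SER = {sentence_error_rate([ref], [hyp]):.0%}")   # 100%: the only sentence has errors
```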
Affiliation(s)
- Yutong Gu
- Graduate School of Engineering, Hokkaido University, Sapporo, Japan
- Correspondence: Yutong Gu
- Chao Zheng
- Wuhan Second Ship Design and Research Institute, China State Shipbuilding Corporation Limited, Wuhan, China
- Masahiro Todoh
- Faculty of Engineering, Hokkaido University, Sapporo, Japan
- Fusheng Zha
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
7
Yang D, Tao H, Ge H, Li Z, Hu Y, Meng J. Altered Processing of Social Emotions in Individuals With Autistic Traits. Front Psychol 2022; 13:746192. PMID: 35310287. PMCID: PMC8931733. DOI: 10.3389/fpsyg.2022.746192.
Abstract
Social impairment is a defining phenotypic feature of autism. The present study investigated whether individuals with autistic traits exhibit altered perceptions of social emotions. Two groups of participants (High-AQ and Low-AQ) were recruited based on their scores on the autism-spectrum quotient (AQ). Their behavioral responses and event-related potentials (ERPs) elicited by social and non-social stimuli with positive, negative, and neutral emotional valence were compared in two experiments. In Experiment 1, participants were instructed to view social-emotional and non-social emotional pictures. In Experiment 2, participants were instructed to listen to social-emotional and non-social emotional audio recordings. More negative emotional reactions and smaller amplitudes of late ERP components (the late positive potential in Experiment 1 and the late negative component in Experiment 2) were found in the High-AQ group than in the Low-AQ group in response to the social-negative stimuli. In addition, in both experiments, the amplitudes of these late ERP components elicited by social-negative stimuli were correlated with the AQ scores of the High-AQ group. These results suggest that individuals with autistic traits show altered processing of social-negative emotions.
Collapse
Affiliation(s)
- Di Yang
- Key Laboratory of Applied Psychology, Chongqing Normal University, Chongqing, China
- School of Education, Chongqing Normal University, Chongqing, China
- Key Laboratory of Emotion and Mental Health, Chongqing University of Arts and Sciences, Chongqing, China
- Hengheng Tao
- Key Laboratory of Applied Psychology, Chongqing Normal University, Chongqing, China
- School of Education, Chongqing Normal University, Chongqing, China
- Hongxin Ge
- Key Laboratory of Applied Psychology, Chongqing Normal University, Chongqing, China
- School of Education, Chongqing Normal University, Chongqing, China
- Zuoshan Li
- Key Laboratory of Applied Psychology, Chongqing Normal University, Chongqing, China
- School of Education, Chongqing Normal University, Chongqing, China
- Yuanyan Hu
- Key Laboratory of Emotion and Mental Health, Chongqing University of Arts and Sciences, Chongqing, China
- Jing Meng
- Key Laboratory of Applied Psychology, Chongqing Normal University, Chongqing, China
- School of Education, Chongqing Normal University, Chongqing, China
8
Sato W, Namba S, Yang D, Nishida S, Ishi C, Minato T. An Android for Emotional Interaction: Spatiotemporal Validation of Its Facial Expressions. Front Psychol 2022; 12:800657. PMID: 35185697. PMCID: PMC8855677. DOI: 10.3389/fpsyg.2021.800657.
Abstract
Android robots capable of emotional interactions with humans have considerable potential for research applications. While several studies have developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids' facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than chance level. In Study 3, Nikola produced dynamic facial expressions for the six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola's emotional facial expressions and suggest that it may be useful for future psychological studies and real-life applications.
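As a side note, "higher than chance" comparisons of this kind are often checked with a one-sided binomial test against the guessing rate; the counts below are hypothetical, and chance is 1/6 because participants chose among six emotion labels.

```python
from scipy.stats import binomtest

# Hypothetical example: 40 of 50 raters chose the intended label for one expression.
result = binomtest(k=40, n=50, p=1 / 6, alternative="greater")
print(f"p = {result.pvalue:.3g}")  # far below .05, i.e., recognition above chance
```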
Affiliation(s)
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Field Science Education and Research Center, Kyoto University, Kyoto, Japan
- Shushi Namba
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Dongsheng Yang
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Shin’ya Nishida
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Carlos Ishi
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Takashi Minato
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
9
Rutkowska JM, Meyer M, Hunnius S. Adults Do Not Distinguish Action Intentions Based on Movement Kinematics Presented in Naturalistic Settings. Brain Sci 2021; 11:821. PMID: 34205675. PMCID: PMC8234011. DOI: 10.3390/brainsci11060821.
Abstract
Predicting others’ actions is an essential part of acting in the social world. Action kinematics have been proposed as a cue to others’ intentions. It remains an open question whether adults can use kinematic information in naturalistic settings, when it is presented as part of a richer visual scene than previously examined. We investigated adults’ perception of intentions from kinematics using naturalistic stimuli in two experiments. In Experiment 1, thirty participants watched grasp-to-drink and grasp-to-place movements and identified the movement intention (to drink or to place), while their mouth-opening muscle activity was measured with electromyography (EMG) to examine motor simulation of the observed actions. We found anecdotal evidence that participants could correctly identify intentions from the action kinematics, although we found no evidence for increased activation of the mylohyoid muscle during the observation of grasp-to-drink compared with grasp-to-place actions. In pre-registered Experiment 2, fifty participants completed the same task online. With the increased statistical power, we found strong evidence that participants were not able to discriminate intentions based on movement kinematics. Together, our findings suggest that the role of action kinematics in intention perception is more complex than previously assumed. Although previous research indicates that under certain circumstances observers can perceive and act upon intention-specific kinematic information, perceptual differences in everyday scenes or observers’ ability to use kinematic information in more naturalistic scenes seem limited.