1
Bress KS, Cascio CJ. Sensorimotor regulation of facial expression - An untouched frontier. Neurosci Biobehav Rev 2024; 162:105684. [PMID: 38710425] [DOI: 10.1016/j.neubiorev.2024.105684]
Abstract
Facial expression is a critical form of nonverbal social communication which promotes emotional exchange and affiliation among humans. Facial expressions are generated via precise contraction of the facial muscles, guided by sensory feedback. While the neural pathways underlying facial motor control are well characterized in humans and primates, it remains unknown how tactile and proprioceptive information reaches these pathways to guide facial muscle contraction. Thus, despite the importance of facial expressions for social functioning, little is known about how they are generated as a unique sensorimotor behavior. In this review, we highlight current knowledge about sensory feedback from the face and how it is distinct from other body regions. We describe connectivity between the facial sensory and motor brain systems, and call attention to the other brain systems which influence facial expression behavior, including vision, gustation, emotion, and interoception. Finally, we petition for more research on the sensory basis of facial expressions, asserting that incomplete understanding of sensorimotor mechanisms is a barrier to addressing atypical facial expressivity in clinical populations.
Affiliation(s)
- Kimberly S Bress
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Carissa J Cascio
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
2
Martineau S, Perrin L, Kerleau H, Rahal A, Marcotte K. Comparison of Objective Facial Metrics on Both Sides of the Face Among Patients with Severe Bell's Palsy Treated with Mirror Effect Plus Protocol Rehabilitation Versus Controls. Facial Plast Surg Aesthet Med 2024; 26:172-179. [PMID: 37819748] [DOI: 10.1089/fpsam.2023.0087]
Abstract
Objective: The extent to which the healthy hemiface dynamically contributes to facial synchronization during facial rehabilitation has been largely unstudied. This study compares the synchronization of both hemifaces in patients with severe Bell's palsy who received either facial rehabilitation with the "Mirror Effect Plus Protocol" (MEPP) or basic counseling. Methods: Baseline and 1-year postonset data from 39 patients (MEPP, n = 19; basic counseling, n = 20) were retrospectively analyzed using Emotrics+, software that generates facial metrics with artificial intelligence (AI) algorithms. Paired t-tests were used for intrasubject comparisons of hemifaces, and mixed-model analyses were used to compare between groups. Results: For voluntary movements, a significant difference in favor of the MEPP group was found only for smiling (p = 0.025). However, at 1-year postonset, the control group showed significant variability between hemifaces for most synkinesis measurements [nasolabial fold (p = 0.029); eye area (p = 0.043); palpebral fissure (p = 0.011)]. Conclusion: In this study, better synchronization of both hemifaces was found in the MEPP group. Interestingly, motor adaptation in the movement amplitude of the healthy hemiface seemed to contribute to this synchronization in MEPP patients. Further studies are needed to standardize AI-based measurement procedures and to adapt them for clinical use.
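To make the intrasubject analysis above concrete, here is a minimal sketch of a paired hemiface comparison. The metric names, values, and sample size are hypothetical and Emotrics+ is not invoked; the point is only the paired t-test structure used for within-patient left/right comparisons.
```python
# Sketch: paired comparison of hemiface metrics across patients.
# Data and metric names are hypothetical, for illustration only.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_patients = 20

# Hypothetical landmark-derived measurements (mm) for each hemiface.
metrics = {
    "nasolabial_fold": (rng.normal(42, 3, n_patients), rng.normal(44, 3, n_patients)),
    "palpebral_fissure": (rng.normal(9, 1, n_patients), rng.normal(9.5, 1, n_patients)),
}

for name, (healthy, affected) in metrics.items():
    t, p = ttest_rel(healthy, affected)  # paired t-test across patients
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```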
Affiliation(s)
- Sarah Martineau
- Département de chirurgie et Direction des Services Multidisciplinaires, Hôpital Maisonneuve-Rosemont, Montréal, Québec, Canada
- Centre de recherche du Centre intégré universitaire de santé et services sociaux du Nord-de-l'île-de-Montréal, Hôpital du Sacré-Cœur de Montréal, Montréal, Québec, Canada
- École d'Orthophonie et d'Audiologie, Faculté de Médecine, Université de Montréal, Montréal, Québec, Canada
- Lucie Perrin
- Département universitaire d'enseignement et de formation en orthophonie, Faculté de Médecine, Université de Sorbonne, Paris, France
- Hélène Kerleau
- Département universitaire d'enseignement et de formation en orthophonie, Faculté de Médecine, Université de Sorbonne, Paris, France
- Akram Rahal
- Département de chirurgie et Direction des Services Multidisciplinaires, Hôpital Maisonneuve-Rosemont, Montréal, Québec, Canada
- École d'Orthophonie et d'Audiologie, Faculté de Médecine, Université de Montréal, Montréal, Québec, Canada
- Karine Marcotte
- Centre de recherche du Centre intégré universitaire de santé et services sociaux du Nord-de-l'île-de-Montréal, Hôpital du Sacré-Cœur de Montréal, Montréal, Québec, Canada
- École d'Orthophonie et d'Audiologie, Faculté de Médecine, Université de Montréal, Montréal, Québec, Canada
3
Ross ED. Affective Prosody and Its Impact on the Neurology of Language, Depression, Memory and Emotions. Brain Sci 2023; 13:1572. [PMID: 38002532] [PMCID: PMC10669595] [DOI: 10.3390/brainsci13111572]
Abstract
Based on the seminal publications of Paul Broca and Carl Wernicke, who established that aphasic syndromes (disorders of the verbal-linguistic aspects of communication) are predominantly the result of focal left-hemisphere lesions, "language" is traditionally viewed as a lateralized function of the left hemisphere. This, in turn, has diminished and delayed acceptance that the right hemisphere also has a vital role in language, specifically in modulating affective prosody, which is essential for communication competency and psychosocial well-being. Focal lesions of the right hemisphere may result in disorders of affective prosody (aprosodic syndromes) that are functionally and anatomically analogous to the aphasic syndromes that occur following focal left-hemisphere lesions. This paper reviews the deductive research published over the last four decades that has elucidated the neurology of affective prosody and, in turn, led to a more complete and nuanced understanding of the neurology of language, depression, emotions and memory. In addition, the paper presents the serendipitous clinical observations (inductive research) and fortuitous interdisciplinary collaborations that were crucial in guiding the deductive research process, which culminated in the concept that primary emotions and related display behaviors are a lateralized function of the right hemisphere, whereas social emotions and related display behaviors are a lateralized function of the left hemisphere.
Affiliation(s)
- Elliott D. Ross
- Department of Neurology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Department of Neurology, University of Colorado School of Medicine, Aurora, CO 80045, USA
4
Straulino E, Scarpazza C, Spoto A, Betti S, Chozas Barrientos B, Sartori L. The Spatiotemporal Dynamics of Facial Movements Reveals the Left Side of a Posed Smile. Biology 2023; 12:1160. [PMID: 37759560] [PMCID: PMC10525663] [DOI: 10.3390/biology12091160]
Abstract
Humans can produce thousands of different facial expressions. This variability stems from the ability to modulate emotional expressions voluntarily or involuntarily, which in turn depends on two anatomically separate pathways. The Voluntary (VP) and Involuntary (IP) pathways mediate the production of posed and spontaneous facial expressions, respectively, and might also affect the left and right sides of the face differently. This is a neglected aspect in the literature on emotion, where posed rather than genuine expressions are often used as stimuli. Two experiments with different induction methods were specifically designed to investigate the unfolding of spontaneous and posed facial expressions of happiness along the vertical facial axis (left, right) with a high-definition 3-D optoelectronic system. The results showed that spontaneous expressions were distinguished from posed facial movements by reliable spatial and speed kinematic patterns in both experiments. Moreover, VP activation produced a lateralization effect: compared with the felt smile, the posed smile involved an initial acceleration of the left corner of the mouth, while an early deceleration of the right corner occurred in the second phase of the movement, after the velocity peak.
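The kinematic quantities discussed above (speed profiles, the velocity peak, and acceleration before and after it) can be extracted from marker trajectories along the following lines. This is a sketch with synthetic mouth-corner trajectories and an assumed sampling rate, not the authors' pipeline.
```python
# Sketch: speed and acceleration profiles for the two mouth corners from
# (n, 3) marker trajectories. Trajectories are synthetic; fs is assumed.
import numpy as np

fs = 250.0                         # Hz, assumed optoelectronic sampling rate
t = np.arange(0, 1.0, 1 / fs)

def speed_profile(xyz):
    """Instantaneous speed (mm/s) from an (n, 3) trajectory."""
    v = np.gradient(xyz, 1 / fs, axis=0)
    return np.linalg.norm(v, axis=1)

# Synthetic smile-like displacements (mm); velocity peaks mid-movement.
rise = (1 - np.cos(np.pi * t)) / 2
rise_lag = (1 - np.cos(np.pi * np.clip(t - 0.02, 0, 1))) / 2
left = np.c_[5 * rise, 3 * rise, np.zeros_like(t)]
right = np.c_[4.5 * rise_lag, 3 * rise_lag, np.zeros_like(t)]

for side, xyz in (("left", left), ("right", right)):
    s = speed_profile(xyz)
    a = np.gradient(s, 1 / fs)     # tangential acceleration
    k = int(np.argmax(s))          # index of the velocity peak
    print(f"{side}: peak speed {s[k]:.1f} mm/s at {t[k] * 1000:.0f} ms, "
          f"mean accel before/after peak: {a[:k].mean():.1f} / {a[k:].mean():.1f} mm/s^2")
```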
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Translational Neuroimaging and Cognitive Lab, IRCCS San Camillo Hospital, Via Alberoni 70, 30126 Venice, Italy
- Andrea Spoto
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Sonia Betti
- Department of Psychology, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Rasi e Spinelli 176, 47521 Cesena, Italy
- Beatriz Chozas Barrientos
- Department of Chiropractic Medicine, University of Zurich, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland
- Luisa Sartori
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Padova Neuroscience Center, University of Padova, Via Giuseppe Orus 2, 35131 Padova, Italy
5
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. [PMID: 37179857] [PMCID: PMC10173880] [DOI: 10.3389/fpsyg.2023.1158136]
Abstract
As we approach celebrations of the 150th anniversary of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has traditionally been anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and, crucially, not everything is in the face. In recent decades, considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static, but continuously acts and reacts to internal and environmental stimuli, with the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways subserve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be controlled consciously only in part, has recently provided a useful operational test for comparing the predictions of various models on the lateralization of emotions. This concise review identifies shortcomings and new challenges in the study of emotion expression at the face, body, and contextual levels, which may eventually result in a theoretical and methodological shift in the study of emotions. We contend that the most feasible way to address the complex world of emotion expression is to define a new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional display, and to the individual mechanisms underlying its expression (i.e., individual emotional signatures).
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
- Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
6
Li R, Liu D, Li Z, Liu J, Zhou J, Liu W, Liu B, Fu W, Alhassan AB. A novel EEG decoding method for a facial-expression-based BCI system using the combined convolutional neural network and genetic algorithm. Front Neurosci 2022; 16:988535. [PMID: 36177358] [PMCID: PMC9513431] [DOI: 10.3389/fnins.2022.988535]
Abstract
Multiple types of brain-control systems have been applied in the field of rehabilitation. As an alternative scheme for balancing user fatigue against the classification accuracy of brain-computer interface (BCI) systems, facial-expression-based brain-control technologies have been proposed as novel BCI systems. Unfortunately, existing machine learning algorithms fail to identify the most relevant features of electroencephalogram signals, which further limits classifier performance. To address this problem, an improved classification method is proposed for facial-expression-based BCI (FE-BCI) systems, using a convolutional neural network (CNN) combined with a genetic algorithm (GA). The CNN was applied to extract features and classify them; the GA was used for hyperparameter selection, extracting the parameters most relevant for classification. To validate the proposed algorithm, experimental performance was systematically evaluated, and a trained CNN-GA model was constructed to control an intelligent car in real time. The average accuracy across all subjects was 89.21 ± 3.79%, and the highest accuracy was 97.71 ± 2.07%. Offline and online experiments both indicated that the improved FE-BCI system outperforms traditional methods.
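A minimal sketch of the GA side of such a CNN-GA scheme follows. The search space, GA operators, and population settings are assumptions; the fitness function is a stand-in that in the actual system would be the cross-validated accuracy of a CNN trained on EEG features with the candidate hyperparameters.
```python
# Sketch: genetic algorithm over CNN hyperparameters. The fitness function is
# a placeholder for CNN validation accuracy; everything here is illustrative.
import random

random.seed(0)
SEARCH_SPACE = {
    "n_filters": [8, 16, 32, 64],
    "kernel_size": [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(genome):
    # Placeholder: pretend mid-sized filters and lr = 1e-3 decode EEG best.
    return -abs(genome["n_filters"] - 32) / 64 - abs(genome["learning_rate"] - 1e-3)

def mutate(genome):
    g = dict(genome)
    key = random.choice(list(SEARCH_SPACE))
    g[key] = random.choice(SEARCH_SPACE[key])
    return g

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

population = [random_genome() for _ in range(10)]
for generation in range(15):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best hyperparameters:", max(population, key=fitness))
```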
Affiliation(s)
- Rui Li
- School of Mechanical and Instrumental Engineering, Xi'an University of Technology, Xi'an, China
- Xi'an People's Hospital, Xi'an, China
- Di Liu
- School of Mechanical and Instrumental Engineering, Xi'an University of Technology, Xi'an, China
- Zhijun Li
- School of Mechanical and Instrumental Engineering, Xi'an University of Technology, Xi'an, China
- Jinli Liu
- School of Mechanical and Instrumental Engineering, Xi'an University of Technology, Xi'an, China
- Jincao Zhou
- School of Mechanical and Instrumental Engineering, Xi'an University of Technology, Xi'an, China
- Weiping Liu
- Xi'an People's Hospital, Xi'an, China
- Bo Liu
- School of Mechanical and Instrumental Engineering, Xi'an University of Technology, Xi'an, China
- Weiping Fu
- School of Mechanical and Instrumental Engineering, Xi'an University of Technology, Xi'an, China
- Ahmad Bala Alhassan
- Department of Electrical and Information Technology, King Mongkut's University of Technology, Bangkok, Thailand
7
Pressman PS, Chen KH, Casey J, Sillau S, Chial HJ, Filley CM, Miller BL, Levenson RW. Incongruences Between Facial Expression and Self-Reported Emotional Reactivity in Frontotemporal Dementia and Related Disorders. J Neuropsychiatry Clin Neurosci 2022; 35:192-201. [PMID: 35989572] [PMCID: PMC10723939] [DOI: 10.1176/appi.neuropsych.21070186]
Abstract
Objective: Emotional reactivity normally involves a synchronized coordination of subjective experience and facial expression. These aspects of emotional reactivity can be uncoupled by neurological illness and produce adverse consequences for patient and caregiver quality of life because of misunderstandings regarding the patient's presumed internal state. Frontotemporal dementia (FTD) is often associated with altered social and emotional functioning. FTD is a heterogeneous disease, and socioemotional changes in patients could result from altered internal experience, altered facial expressive ability, altered language skills, or other factors. The authors investigated how individuals with FTD subtypes differ from a healthy control group in the extent to which their facial expressivity aligns with their self-reported emotional experience. Methods: Using a compound measure of emotional reactivity to assess reactions to three emotionally provocative videos, the authors explored potential explanations for differences in alignment of facial expressivity with emotional experience, including parkinsonism, physiological reactivity, and nontarget verbal responses. Results: Participants with the three main subtypes of FTD all tended to express less emotion on their faces than they did through self-report. Conclusions: Exploratory analyses suggest that the reasons for this incongruence likely differ not only between but also within diagnostic subgroups.
Affiliation(s)
- Peter S Pressman, Kuan Hua Chen, James Casey, Stefan Sillau, Heidi J Chial, Christopher M Filley, Bruce L Miller, Robert W Levenson
- Department of Neurology Behavioral Neurology Section (Pressman, Filley), Alzheimer's and Cognition Center (Pressman, Sillau, Chial), Linda Crnic Institute for Down Syndrome (Chial), and Marcus Institute for Brain Health (Filley), University of Colorado Anschutz Medical Campus, Aurora; Berkeley Psychophysiology Laboratory, University of California, Berkeley (Chen, Casey, Levenson); Memory and Aging Center, University of California, San Francisco (Miller)
8
Human face and gaze perception is highly context specific and involves bottom-up and top-down neural processing. Neurosci Biobehav Rev 2021; 132:304-323. [PMID: 34861296] [DOI: 10.1016/j.neubiorev.2021.11.042]
Abstract
This review summarizes human perception and processing of face and gaze signals. Face and gaze signals are important means of non-verbal social communication. The review highlights that: (1) some evidence suggests that the perception and processing of facial information begins in the prenatal period; (2) the perception and processing of face identity, expression and gaze direction is highly context specific, the effects of race and culture being a case in point: through experiential shaping and social categorization, culture affects the way in which information on face and gaze is collected and perceived; (3) face and gaze processing occurs in the so-called 'social brain'. Accumulating evidence suggests that the processing of facial identity, facial emotional expression and gaze involves two parallel and interacting pathways: a fast and crude subcortical route and a slower cortical pathway. The flow of information is bi-directional and includes bottom-up and top-down processing. The cortical networks particularly include the fusiform gyrus, superior temporal sulcus (STS), intraparietal sulcus, temporoparietal junction and medial prefrontal cortex.
9
Yamamoto K, Kimura M, Osaka M. Sorry, Not Sorry: Effects of Different Types of Apologies and Self-Monitoring on Non-verbal Behaviors. Front Psychol 2021; 12:689615. [PMID: 34512447] [PMCID: PMC8428520] [DOI: 10.3389/fpsyg.2021.689615]
Abstract
This study examines the effects of different types of apologies and individual differences in self-monitoring on non-verbal apology behaviors, in a setting where a server apologizes to a customer. Apologies can be divided into sincere apologies, which reflect genuine recognition of fault, and instrumental apologies, which are made to achieve a personal goal such as avoiding punishment or rejection by others. Two facets of self-monitoring (public performing and other-directedness) were also examined. Fifty-three female undergraduate students participated in the experiment. Participants were randomly assigned to either a sincere-apology or an instrumental-apology condition. They watched a film clip of an interaction between a customer and a server and then role-played how they would apologize if they were the server. Participants' non-verbal behavior during the role-play was videotaped. The results showed an interaction between apology condition and self-monitoring on non-verbal behaviors. When public performing was low, gaze avoidance was more likely to occur with a sincere apology than with an instrumental apology; there was no difference when public performing was high. Facial displays of apology were more apparent in the instrumental apology than in the sincere apology, and this tendency became more conspicuous as public performing increased. Our results indicate that the higher the public performing, the more participants tried to convey the feeling of apology by combining a direct gaze and facial displays in an instrumental apology. On the other hand, the results suggest that lower levels of public performing elicited less immediacy in offering a sincere apology. Further studies are needed to determine whether these results apply to other conflict-resolution situations.
Affiliation(s)
- Kyoko Yamamoto
- Department of Psychology, Kobe Gakuin University, Kobe, Japan
- Masanori Kimura
- Department of Psychological and Behavioral Sciences, Kobe College, Nishinomiya, Japan
- Miki Osaka
- Department of Psychological and Behavioral Sciences, Kobe College, Nishinomiya, Japan
10
Ross ED. Differential Hemispheric Lateralization of Emotions and Related Display Behaviors: Emotion-Type Hypothesis. Brain Sci 2021; 11:1034. [PMID: 34439653] [PMCID: PMC8393469] [DOI: 10.3390/brainsci11081034]
Abstract
There are two well-known hypotheses regarding hemispheric lateralization of emotions. The Right Hemisphere Hypothesis (RHH) postulates that emotions and associated display behaviors are a dominant and lateralized function of the right hemisphere. The Valence Hypothesis (VH) posits that negative emotions and related display behaviors are modulated by the right hemisphere, whereas positive emotions and related display behaviors are modulated by the left hemisphere. Although both the RHH and VH are supported by extensive research data, they are mutually exclusive, suggesting that a missing factor may be in play that could provide a more accurate description of how emotions are lateralized in the brain. Evidence will be presented that provides a much broader perspective on emotions by embracing the concept that emotions can be classified into primary and social types, and that hemispheric lateralization is better explained by the Emotion-Type Hypothesis (ETH). The ETH posits that primary emotions and related display behaviors are modulated by the right hemisphere, whereas social emotions and related display behaviors are modulated by the left hemisphere.
Affiliation(s)
- Elliott D. Ross
- Department of Neurology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Department of Neurology, University of Colorado School of Medicine, Aurora, CO 80045, USA
11
Daeschler SC, Zuker R, Borschel GH. Strategies to Improve Cross-Face Nerve Grafting in Facial Paralysis. Facial Plast Surg Clin North Am 2021; 29:423-430. [PMID: 34217445] [DOI: 10.1016/j.fsc.2021.03.009]
Abstract
Cross-face nerve grafting enables reanimation of the contralateral hemiface in unilateral facial palsy and may restore a spontaneous smile. This article discusses clinically applicable strategies to increase the chances of good functional outcomes: maintaining the viability of the neural pathway and target muscle, increasing the number of reinnervating nerve fibers, and selecting functionally compatible donor nerve branches. Adopting these strategies may help to further improve patient outcomes in facial reanimation surgery.
Affiliation(s)
- Simeon C Daeschler
- Neuroscience and Mental Health Program, Hospital for Sick Children (SickKids), Toronto, Ontario, Canada
- Ronald Zuker
- Division of Plastic and Reconstructive Surgery, Hospital for Sick Children (SickKids), University of Toronto, Toronto, Ontario, Canada
- Gregory H Borschel
- Division of Plastic and Reconstructive Surgery, Hospital for Sick Children (SickKids), University of Toronto, Toronto, Ontario, Canada
12
Zhang X, Li R, Li H, Lu Z, Hu Y, Alhassan AB. Novel approach for electromyography-controlled prostheses based on facial action. Med Biol Eng Comput 2020; 58:2685-2698. [PMID: 32862364] [PMCID: PMC7557511] [DOI: 10.1007/s11517-020-02236-3]
Abstract
Individuals with severe tetraplegia frequently need to control complex assistive devices using the residual movement they retain above the neck. Electromyography (EMG) signals from contractions of the facial muscles enable people to produce multiple command signals by conveying information about attempted movements. In this study, a novel EMG-controlled system based on facial actions was developed. The mechanisms of different facial actions were processed using an EMG control model, and four asymmetric and symmetric actions were defined to control a two-degree-of-freedom (2-DOF) prosthesis. Both indoor and outdoor experiments were conducted to validate the feasibility of EMG-controlled prostheses based on facial action. The experimental results indicated that the new paradigm yields high performance and efficient control for prosthesis applications.
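To illustrate the general idea of mapping facial-muscle EMG to discrete prosthesis commands, here is a minimal sketch based on envelope thresholding. The channel layout, threshold, and action-to-command table are illustrative assumptions, not the authors' exact control model.
```python
# Sketch: mapping two facial-EMG channels to 2-DOF prosthesis commands by
# rectify-and-smooth envelope thresholding. Signals are synthetic.
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

def envelope(emg, win=100):
    """Rectify and moving-average an EMG trace."""
    return np.convolve(np.abs(emg), np.ones(win) / win, mode="same")

# Synthetic signals: left cheek active mid-trial, right cheek quiet.
left = rng.normal(0, 0.5, t.size) * (1 + 4 * (t > 0.3) * (t < 0.7))
right = rng.normal(0, 0.5, t.size)

THRESH = 1.0
active = (envelope(left).max() > THRESH, envelope(right).max() > THRESH)
COMMANDS = {  # (left, right) activation -> hypothetical 2-DOF command
    (True, False): "rotate left",
    (False, True): "rotate right",
    (True, True): "close gripper",   # symmetric facial action
    (False, False): "idle",
}
print("command:", COMMANDS[active])
```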
Affiliation(s)
- Xiaodong Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Rui Li
- School of Mechanical and Precision Instrument Engineering, Xi'an University of Technology, Xi'an, China
- Hanzhe Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Zhufeng Lu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Yong Hu
- Department of Orthopaedics & Traumatology, The University of Hong Kong, Hong Kong, China
- Ahmad Bala Alhassan
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
13
Park S, Lee K, Lim JA, Ko H, Kim T, Lee JI, Kim H, Han SJ, Kim JS, Park S, Lee JY, Lee EC. Differences in Facial Expressions between Spontaneous and Posed Smiles: Automated Method by Action Units and Three-Dimensional Facial Landmarks. Sensors (Basel) 2020; 20:E1199. [PMID: 32098261] [PMCID: PMC7070510] [DOI: 10.3390/s20041199]
Abstract
Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further examine the movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles using three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions of happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities in the upper face than in the lower face, whereas posed smiles show higher intensities in the lower part of the face. Furthermore, the 3-D facial landmark technique revealed that the left eyebrow displayed stronger intensity during spontaneous smiles than the right eyebrow. These findings suggest a potential application of landmark-based emotion recognition: spontaneous smiles can be distinguished from posed smiles by measuring the relative intensities of the upper and lower face, with a focus on left-sided asymmetry in the upper region.
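A per-region movement-intensity measure of the kind described above can be sketched as follows. The upper/lower split uses the common 68-point landmark convention; the frame data here are synthetic, and the specific intensity definition (mean frame-to-frame displacement) is an assumption for illustration.
```python
# Sketch: upper- vs lower-face movement intensity over 68 facial landmarks.
# In the common 68-point scheme: 0-16 jaw, 17-26 brows, 36-47 eyes, 48-67 mouth.
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_landmarks = 120, 68
frames = rng.normal(0, 0.1, (n_frames, n_landmarks, 3)).cumsum(axis=0)

UPPER = list(range(17, 27)) + list(range(36, 48))   # brows + eyes
LOWER = list(range(48, 68))                         # mouth

def intensity(idx):
    """Mean frame-to-frame displacement of the selected landmarks."""
    d = np.diff(frames[:, idx, :], axis=0)
    return np.linalg.norm(d, axis=2).mean()

print(f"upper-face intensity: {intensity(UPPER):.4f}")
print(f"lower-face intensity: {intensity(LOWER):.4f}")
```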
Affiliation(s)
- Seho Park
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Korea
- Dental Research Institute, Seoul National University, School of Dentistry, Seoul 08826, Korea
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Kunyoung Lee
- Department of Computer Science, Sangmyung University, Seoul 03016, Korea
- Jae-A Lim
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Hyunwoong Ko
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Korea
- Dental Research Institute, Seoul National University, School of Dentistry, Seoul 08826, Korea
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Taehoon Kim
- Seoul National University College of Medicine, Seoul 03080, Korea
- Jung-In Lee
- Seoul National University College of Medicine, Seoul 03080, Korea
- Hakrim Kim
- Seoul National University College of Medicine, Seoul 03080, Korea
- Seong-Jae Han
- Seoul National University College of Medicine, Seoul 03080, Korea
- Jeong-Shim Kim
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Soowon Park
- Department of Education, Sejong University, Seoul 05006, Korea
- Jun-Young Lee
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Eui Chul Lee
- Department of Human Centered Artificial Intelligence, Sangmyung University, Seoul 03016, Korea
14
Detecting Happiness Using Hyperspectral Imaging Technology. Comput Intell Neurosci 2019; 2019:1965789. [PMID: 30766598] [PMCID: PMC6350538] [DOI: 10.1155/2019/1965789]
Abstract
Hyperspectral imaging (HSI) technology can be used to detect human emotions based on its ability to discriminate materials in facial tissue. In this paper, HSI is used to remotely sense and distinguish blood chromophores in facial tissues and to acquire an evaluation indicator (tissue oxygen saturation, StO2) using an optical absorption model. This study explored facial analysis while people showed spontaneous expressions of happiness during social interaction. Happiness, as a psychological emotion, has been shown to be strongly linked to other activities such as physiological reactions and facial expression. Moreover, facial expression as a communicative motor behavior likely arises from musculoskeletal anatomy, neuromuscular activity, and individual personality. This paper quantified the neuromotor movements of tissues surrounding regions of interest (ROIs) during happy smiling. Six regions (the forehead, eye, nose, cheek, mouth, and chin) were selected according to the Facial Action Coding System (FACS), and nineteen segments were subsequently partitioned from these ROIs. The affective data (StO2) of 23 young adults were acquired by HSI while the participants expressed emotions (calm or happy), and repeated-measures analysis of variance was used to compare variations in StO2 between the different ROIs. The results demonstrate that happiness produces different distributions of StO2 variation across the ROIs; these are explained in depth in the article. This study establishes that facial tissue oxygen saturation is a valid and reliable physiological indicator of happiness and merits further research.
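The optical absorption model behind StO2 estimation can be illustrated with a textbook two-wavelength Beer-Lambert inversion. This is a simplified sketch, not the study's full hyperspectral model: the wavelength pair is my choice, the extinction coefficients are approximate tabulated values, and the absorbances are made up.
```python
# Sketch: two-wavelength Beer-Lambert estimate of tissue oxygen saturation.
# Extinction coefficients are approximate; absorbances are hypothetical.
import numpy as np

# Molar extinction coefficients (cm^-1 / M), rows = wavelengths (660, 940 nm),
# columns = [HbO2, Hb].
E = np.array([[320.0, 3227.0],
              [1214.0, 694.0]])

A = np.array([0.30, 0.25])             # hypothetical absorbances at 660 / 940 nm

c_hbo2, c_hb = np.linalg.solve(E, A)   # relative concentrations (path length folded in)
sto2 = c_hbo2 / (c_hbo2 + c_hb)
print(f"StO2 = {100 * sto2:.1f}%")
```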
15
Li R, Zhang X, Lu Z, Liu C, Li H, Sheng W, Odekhe R. An Approach for Brain-Controlled Prostheses Based on a Facial Expression Paradigm. Front Neurosci 2018; 12:943. [PMID: 30618572] [PMCID: PMC6305548] [DOI: 10.3389/fnins.2018.00943]
Abstract
One of the most exciting areas of rehabilitation research is brain-controlled prostheses, which translate electroencephalography (EEG) signals into control commands that operate prostheses. However, existing brain-control methods face a trade-off between the choice of brain-computer interface (BCI) paradigm and the performance it can deliver. In this paper, a novel BCI system based on a facial expression paradigm is proposed for prosthesis control, exploiting the characteristics of theta and alpha rhythms in the prefrontal and motor cortices. A portable brain-controlled prosthesis system was constructed to validate the feasibility of the facial-expression-based BCI (FE-BCI) system. Four types of facial expressions were used in this study. An effective filtering algorithm based on noise-assisted multivariate empirical mode decomposition (NA-MEMD) and sample entropy (SampEn) was used to remove electromyography (EMG) artifacts. A wavelet transform (WT) was applied to calculate the feature set, and a back-propagation neural network (BPNN) was employed as the classifier. To prove the effectiveness of the FE-BCI system for prosthesis control, 18 subjects took part in both offline and online experiments. The grand average accuracy over the 18 subjects was 81.31 ± 5.82% in the online experiment. The experimental results indicated that the proposed FE-BCI system achieved good performance and can be efficiently applied to prosthesis control.
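Sample entropy, one ingredient of the NA-MEMD + SampEn artifact filter named above, quantifies signal irregularity: low for predictable rhythms, high for noise. Below is a compact, slightly simplified implementation with common default parameters and synthetic signals; it is a sketch of the general measure, not the authors' exact filtering code.
```python
# Sketch: sample entropy (SampEn) of a 1-D signal, m = 2, r = 0.2 * std.
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional match probability."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        n = templates.shape[0]
        return ((d <= r).sum() - n) / 2   # matching pairs, self-matches excluded
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
noisy = rng.normal(size=1000)
print(f"SampEn(sine)  = {sample_entropy(regular):.3f}")   # low: predictable
print(f"SampEn(noise) = {sample_entropy(noisy):.3f}")     # high: irregular
```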
Affiliation(s)
- Rui Li
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Xiaodong Zhang
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Zhufeng Lu
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Chang Liu
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Hanzhe Li
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Weihua Sheng
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, United States
- Shenzhen Academy of Robotics, Shenzhen, China
- Randolph Odekhe
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
16
Neurophysiology of spontaneous facial expressions: II. Motor control of the right and left face is partially independent in adults. Cortex 2018; 111:164-182. [DOI: 10.1016/j.cortex.2018.10.027]
Abstract
Facial expressions are described traditionally as monolithic or unitary entities. However, humans have the capacity to produce facial blends of emotion in which the upper and lower face simultaneously display different expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face in each hemisphere that, presumably, also occur in humans. Using high-speed videography, we began measuring the movement dynamics of spontaneous facial expressions, including facial blends, to develop a more complete understanding of the neurophysiology underlying facial expressions. In our part 1 publication in Cortex (2016), we found that hemispheric motor control of the upper and lower face is overwhelmingly independent; 242 (99%) of the expressions were classified as demonstrating independent hemispheric motor control whereas only 3 (1%) were classified as demonstrating dependent hemispheric motor control. In this companion paper (part 2), 251 unitary facial expressions that occurred on either the upper or lower face were analyzed. 164 (65%) expressions demonstrated dependent hemispheric motor control whereas 87 (35%) expressions demonstrated independent or dual hemispheric motor control, indicating that some expressions represent facial blends of emotion that occur across the vertical facial axis. These findings also support the concepts that 1) spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical facial axis and 2) facial expressions are complex, multi-component, motoric events. Based on the Emotion-type hypothesis of cerebral lateralization, we propose that facial expressions modulated by a primary-emotional response to an environmental event are initiated by the right hemisphere on the left side of the face whereas facial expressions modulated by a social-emotional response to an environmental event are initiated by the left hemisphere on the right side of the face.
17
Boutsen FA, Dvorak JD, Pulusu VK, Ross ED. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions. Vision Res 2017; 133:150-160. [PMID: 28279711] [DOI: 10.1016/j.visres.2016.07.012]
Abstract
Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning.
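The saccadic-target measurements this kind of study relies on are typically derived from eye-tracker samples with velocity-threshold saccade detection. A minimal sketch follows; the sampling rate, threshold, face geometry, and gaze data are all illustrative assumptions, not the authors' apparatus.
```python
# Sketch: velocity-threshold detection of the first saccade and classification
# of its landing position as upper vs lower face. Gaze data are synthetic.
import numpy as np

fs = 500.0                                  # Hz, assumed eye-tracker rate
t = np.arange(0, 0.6, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic vertical gaze (degrees): central fixation, saccade to the lower
# face at ~200 ms, plus measurement noise.
y = np.where(t < 0.2, 0.0, -4.0) + rng.normal(0, 0.05, t.size)
vel = np.abs(np.gradient(y, 1 / fs))        # deg/s

SACCADE_VEL = 100.0                         # deg/s detection threshold
onsets = np.flatnonzero((vel[1:] > SACCADE_VEL) & (vel[:-1] <= SACCADE_VEL)) + 1
if onsets.size:
    land = y[min(onsets[0] + int(0.05 * fs), y.size - 1)]  # gaze ~50 ms after onset
    print("first saccade lands on the", "upper" if land > 0 else "lower", "face")
```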
Affiliation(s)
- Frank A Boutsen
- Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA
- Justin D Dvorak
- Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA
- Vinay K Pulusu
- Department of Neurology, University of Oklahoma Health Sciences Center, and the VA Medical Center (127), 921 NE 13th Street, Oklahoma City, OK 73104, USA
- Elliott D Ross
- Department of Neurology, University of Oklahoma Health Sciences Center, and the VA Medical Center (127), 921 NE 13th Street, Oklahoma City, OK 73104, USA; Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA