1
Jamali R, Generosi A, Villafan JY, Mengoni M, Pelagalli L, Battista G, Martarelli M, Chiariotti P, Mansi SA, Arnesano M, Castellini P. Facial Expression Recognition for Measuring Jurors' Attention in Acoustic Jury Tests. Sensors (Basel) 2024; 24:2298. [PMID: 38610510] [PMCID: PMC11014261] [DOI: 10.3390/s24072298]
Abstract
The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators support the research hypothesis, such as the correlation between jurors' emotional responses and valence values, the accuracy of the jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by facial expression recognition (FER). Specifically, analysis of attention levels across different statuses reveals a discernible decrease: 70% of jurors exhibit reduced attention in the 'distracted' state and 62% in the 'heavy-eyed' state. Regression analysis, in turn, shows that the correlation between jurors' valence and their choices in the jury test increases when only the data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
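For illustration, a minimal sketch of the attention-gated correlation analysis the abstract describes, assuming a per-frame table with valence, choice, and attention columns; the column names and threshold are hypothetical, not taken from the paper:

```python
import pandas as pd
from scipy.stats import pearsonr

def valence_choice_correlation(df: pd.DataFrame, attention_threshold: float = 0.5):
    """Correlation between FER valence and jury-test choices, computed on
    all data and again on attentive-only data."""
    r_all, _ = pearsonr(df["valence"], df["choice"])
    # keep only the frames where the juror's measured attention is high
    attentive = df[df["attention"] >= attention_threshold]
    r_att, _ = pearsonr(attentive["valence"], attentive["choice"])
    return r_all, r_att  # the abstract reports the second rising above the first
```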
Affiliation(s)
- Reza Jamali
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Andrea Generosi
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Josè Yuri Villafan
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Maura Mengoni
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Leonardo Pelagalli
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Gianmarco Battista
- Department of Engineering and Architecture, Università di Parma, Parco Area delle Scienze 181/A, 43124 Parma, Italy
- Milena Martarelli
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Paolo Chiariotti
- Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa, 1, 20156 Milano, Italy
- Silvia Angela Mansi
- Università Telematica eCampus, via Isimbardi 10, 22060 Novedrate, Italy
- Marco Arnesano
- Università Telematica eCampus, via Isimbardi 10, 22060 Novedrate, Italy
- Paolo Castellini
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
2
Word recognition through mapping of lip movements from speech utterance using audiovisual fusion and MLP. Int J Health Sci (Qassim) 2022. [DOI: 10.53730/ijhs.v6ns2.6078]
Abstract
Speech carries more information than text, but in noisy environments it suffers from being improperly decoded by humans, and the same is true for machines. Because speech is bimodal, augmenting its audio features with visual features, specifically those related to lip movements, can improve the degree of speech recognition. The objective of this work is to use audio and visual features jointly to aid word recognition. We extracted MFCC features from the audio and geometric features of the lip movements, which together form the input to a machine learning algorithm that predicts the word utterances. Videos of word utterances were extracted from the TIMIT database. The statistical information from the audio and the corresponding visual features from the lip movements form the input feature vector to the machine learning algorithm (a multi-layer perceptron). The experimental results show a word recognition accuracy of 91% with the MLP, whereas a KNN classifier attains 61%. These results have important implications for applications in HMI communication and for assisting the hearing impaired.
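As a rough sketch of the fusion pipeline described here, the following assumes utterance-level MFCC statistics concatenated with precomputed lip-geometry features before classification; the feature dimensions and hyperparameters are assumptions, not the paper's settings:

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

def mfcc_stats(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Utterance-level MFCC statistics: per-coefficient mean and std."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def fuse(audio_feats: np.ndarray, lip_feats: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate audio statistics with lip-geometry features."""
    return np.concatenate([audio_feats, lip_feats])

# X: fused feature vectors (one row per utterance), y: word labels
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)  # reported stronger
knn = KNeighborsClassifier(n_neighbors=5)                     # reported baseline
```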
3
Does One Size Fit All? A Case Study to Discuss Findings of an Augmented Hands-Free Robot Teleoperation Concept for People with and without Motor Disabilities. Technologies 2022. [DOI: 10.3390/technologies10010004]
Abstract
Hands-free robot teleoperation and augmented reality have the potential to create an inclusive environment for people with motor disabilities by allowing them to teleoperate robotic arms to manipulate objects. However, the experiences evoked by the same teleoperation concept and augmented reality can vary significantly for people with motor disabilities compared to those without. In this paper, we report the experiences of Miss L., a person with multiple sclerosis, when teleoperating a robotic arm in a hands-free multimodal manner using a virtual menu and visual hints presented through the Microsoft HoloLens 2. We discuss our findings and compare her experiences to those of people without disabilities using the same teleoperation concept. Additionally, we present three learning points drawn from comparing these experiences: re-evaluating the metrics used to measure performance, being aware of bias, and considering the variability in abilities that evokes different experiences. We consider that these learning points can be extrapolated to carrying out human–robot interaction evaluations with mixed groups of participants with and without disabilities.
4
Gideon J, McInnis MG, Provost EM. Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG). IEEE Transactions on Affective Computing 2021; 12:1055-1068. [PMID: 35695825] [PMCID: PMC9173710] [DOI: 10.1109/taffc.2019.2916092]
Abstract
Automatic speech emotion recognition provides computers with critical context to enable user understanding. While methods trained and tested within the same dataset have been shown successful, they often fail when applied to unseen datasets. To address this, recent work has focused on adversarial methods to find more generalized representations of emotional speech. However, many of these methods have issues converging, and only involve datasets collected in laboratory conditions. In this paper, we introduce Adversarial Discriminative Domain Generalization (ADDoG), which follows an easier to train "meet in the middle" approach. The model iteratively moves representations learned for each dataset closer to one another, improving cross-dataset generalization. We also introduce Multiclass ADDoG, or MADDoG, which is able to extend the proposed method to more than two datasets, simultaneously. Our results show consistent convergence for the introduced methods, with significantly improved results when not using labels from the target dataset. We also show how, in most cases, ADDoG and MADDoG can be used to improve upon baseline state-of-the-art methods when target dataset labels are added and in-the-wild data are considered. Even though our experiments focus on cross-corpus speech emotion, these methods could be used to remove unwanted factors of variation in other settings.
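The "meet in the middle" idea can be sketched as a two-step adversarial update, shown below in a deliberately condensed PyTorch form; the layer sizes, loss weighting, and valence-regression head are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 64))
emotion_head = nn.Linear(64, 1)   # e.g., valence regression
critic = nn.Linear(64, 2)         # which of two corpora did z come from?

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(emotion_head.parameters()), lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)
task_loss, domain_loss = nn.MSELoss(), nn.CrossEntropyLoss()

def train_step(x, y_emotion, y_domain, lam=0.1):
    # 1) critic learns to identify each sample's source corpus
    with torch.no_grad():
        z = encoder(x)
    c_loss = domain_loss(critic(z), y_domain)
    opt_critic.zero_grad(); c_loss.backward(); opt_critic.step()
    # 2) encoder keeps emotion information while fooling the critic,
    #    pulling the two corpora's representations toward each other
    z = encoder(x)
    fool = domain_loss(critic(z), 1 - y_domain)  # target: the *other* corpus
    e_loss = task_loss(emotion_head(z).squeeze(-1), y_emotion) + lam * fool
    opt_main.zero_grad(); e_loss.backward(); opt_main.step()
    return c_loss.item(), e_loss.item()
```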
5
Guha T, Yang Z, Grossman RB, Narayanan SS. A Computational Study of Expressive Facial Dynamics in Children with Autism. IEEE Transactions on Affective Computing 2018; 9:14-20. [PMID: 29963280] [PMCID: PMC6022860] [DOI: 10.1109/taffc.2016.2578316]
Abstract
Several studies have established that facial expressions of children with autism are often perceived as atypical, awkward, or less engaging by typical adult observers. Despite this clear deficit in the quality of facial expression production, very little is understood about its underlying mechanisms and characteristics. This paper takes a computational approach to studying details of the facial expressions of children with high functioning autism (HFA). The objective is to uncover characteristics of these facial expressions that are notably distinct from those of typically developing children and that are otherwise difficult to detect by visual inspection. We use motion capture data obtained from subjects with HFA and typically developing subjects while they produced various facial expressions. These data are analyzed to investigate how the overall and local facial dynamics of children with HFA differ from those of their typically developing peers. Our major observations include reduced complexity in the dynamic facial behavior of the HFA group, arising primarily from the eye region.
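One plausible way to quantify the "complexity" of a facial marker trajectory is sample entropy; the sketch below is an illustrative stand-in, not the estimator actually used in the paper:

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r_factor: float = 0.2) -> float:
    """Sample entropy of a 1-D signal, e.g., one mocap marker coordinate
    over time. Lower values indicate more regular (less complex) dynamics."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()  # tolerance as a fraction of signal std

    def match_count(length: int) -> int:
        # Chebyshev distances between all templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return int((d <= r).sum()) - len(templates)  # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```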
Affiliation(s)
- Tanaya Guha
- Department of Electrical Engineering, Indian Institute of Technology Kanpur, India
- Zhaojun Yang
- Signal Analysis and Interpretation Lab (SAIL), University of Southern California, Los Angeles
- Shrikanth S Narayanan
- Signal Analysis and Interpretation Lab (SAIL), University of Southern California, Los Angeles
6
Liaci E, Fischer A, Heinrichs M, van Elst LT, Kornmeier J. Mona Lisa is always happy - and only sometimes sad. Sci Rep 2017; 7:43511. [PMID: 28281547] [PMCID: PMC5345090] [DOI: 10.1038/srep43511]
Abstract
The worldwide fascination with da Vinci's Mona Lisa has been attributed to the emotional ambiguity of her facial expression. In the present study we manipulated Mona Lisa's mouth curvature as one potential source of this ambiguity and studied how a range of happier and sadder face variants influences perception. In two experimental conditions we presented different stimulus ranges, with different step sizes between stimuli, along the happy-sad axis of emotional face expressions. Stimuli were presented in random order, and participants indicated the perceived emotional face expression (first task) and the confidence of their response (second task). The probability of responding 'happy' to the original Mona Lisa was close to 100%. Furthermore, in both conditions the perceived happiness of the Mona Lisa variants followed sigmoidal functions of the mouth curvature. Participants' confidence was weakest around the sigmoidal inflection points. Remarkably, the sigmoidal functions, as well as confidence values and reaction times, differed significantly between experimental conditions. Finally, participants responded generally faster to happy than to sad faces. Overall, the original Mona Lisa seems to be less ambiguous than expected; however, perception of and reaction to the emotional face content is relative and strongly depends on the stimulus range used.
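The sigmoidal dependence on mouth curvature reported here corresponds to a standard psychometric-function fit, sketched below with hypothetical stimulus levels and response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(curvature, x0, k):
    """Logistic psychometric function: P('happy') given mouth curvature."""
    return 1.0 / (1.0 + np.exp(-k * (curvature - x0)))

# Hypothetical stimulus range and observed proportions of 'happy' responses
curvatures = np.linspace(-1, 1, 9)
p_happy = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

(x0, k), _ = curve_fit(psychometric, curvatures, p_happy, p0=[0.0, 5.0])
# x0 is the inflection point, where the abstract notes confidence is weakest;
# k is the slope, which differed between the two stimulus-range conditions
```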
Affiliation(s)
- Emanuela Liaci
- Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany
- Eye Center, Medical Center, University of Freiburg, Freiburg, Germany
- Center for Mental Disorders, Medical Center, University of Freiburg, Freiburg, Germany
- Faculty of Medicine, University of Freiburg, Germany
- Andreas Fischer
- Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany
- Markus Heinrichs
- Laboratory for Biological and Personality Psychology, Department of Psychology, University of Freiburg, Freiburg, Germany
- Ludger Tebartz van Elst
- Center for Mental Disorders, Medical Center, University of Freiburg, Freiburg, Germany
- Faculty of Medicine, University of Freiburg, Germany
- Jürgen Kornmeier
- Institute for Frontier Areas of Psychology and Mental Health, Freiburg, Germany
- Eye Center, Medical Center, University of Freiburg, Freiburg, Germany
- Center for Mental Disorders, Medical Center, University of Freiburg, Freiburg, Germany
- Faculty of Medicine, University of Freiburg, Germany
7
Effects of the Lee Silverman Voice Treatment (LSVT® LOUD) on hypomimia in Parkinson's disease. J Int Neuropsychol Soc 2014; 20:302-12. [PMID: 24524211] [DOI: 10.1017/s1355617714000046]
Abstract
Given associations between facial movement and voice, the potential of the Lee Silverman Voice Treatment (LSVT) to alleviate decreased facial expressivity, termed hypomimia, in Parkinson's disease (PD) was examined. Fifty-six participants--16 PD participants who underwent LSVT, 12 PD participants who underwent articulation treatment (ARTIC), 17 untreated PD participants, and 11 controls without PD--produced monologues about happy emotional experiences at pre- and post-treatment timepoints ("T1" and "T2," respectively), 1 month apart. The groups of LSVT, ARTIC, and untreated PD participants were matched on demographic and health status variables. The frequency and variability of facial expressions (Frequency and Variability) observable on 1-min monologue videorecordings were measured using the Facial Action Coding System (FACS). At T1, the Frequency and Variability of participants with PD were significantly lower than those of controls. Frequency and Variability increases of LSVT participants from T1 to T2 were significantly greater than those of ARTIC or untreated participants. Whereas the Frequency and Variability of ARTIC participants at T2 were significantly lower than those of controls, LSVT participants did not significantly differ from controls on these variables at T2. The implications of these findings, which suggest that LSVT reduces parkinsonian hypomimia, for PD-related psychosocial problems are considered.
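The two FACS-derived measures can be read as expression events per minute (Frequency) and the number of distinct action units produced (Variability); the sketch below assumes that operationalization, which the abstract does not spell out:

```python
from collections import Counter

def facs_measures(au_events, duration_min: float = 1.0):
    """au_events: FACS action-unit codes observed in a coded monologue,
    e.g., ["AU6", "AU12", "AU12", "AU1"]. Returns (Frequency, Variability)
    under the assumed definitions above."""
    frequency = len(au_events) / duration_min   # expression events per minute
    variability = len(Counter(au_events))       # distinct AUs produced
    return frequency, variability
```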
8
Narayanan S, Georgiou PG. Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language. Proceedings of the IEEE 2013; 101:1203-1233. [PMID: 24039277] [PMCID: PMC3769794] [DOI: 10.1109/jproc.2012.2236291]
Abstract
The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.
Affiliation(s)
- Shrikanth Narayanan
- Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089 USA
- Panayiotis G. Georgiou
- Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089 USA
9
Mariooryad S, Busso C. Generating Human-Like Behaviors Using Joint, Speech-Driven Models for Conversational Agents. IEEE Transactions on Audio, Speech, and Language Processing 2012. [DOI: 10.1109/tasl.2012.2201476]
10
Halder A, Chakraborty A, Konar A, Shaw S. A New Approach to Emotion Recognition from the Lip Contour of a Subject. Int J Artif Intell T 2012. [DOI: 10.1142/s0218213012400027]
Abstract
This paper proposes an alternative approach to emotion recognition based on the outer lip contour of the subject. Subjects exhibit their emotions through their facial expressions, and the lip region is segmented from their facial images. A lip contour model has been developed, and the parameters of the model are adapted using a differential evolution algorithm to match the actual outer contour of the lip. An SVM classifier is then employed to classify the emotion of the subject from the parameter set of the subject's lip contour. The experiment was performed on 50 subjects in the age group of 20-30 years, and the worst-case accuracy in emotion classification was found to be 86%.
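A compact sketch of the described pipeline follows: differential evolution fits the lip-contour parameters, which then feed an SVM. The parabolic contour model and its error function are placeholders, not the paper's actual parameterization:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVC

def contour_error(params, lip_points):
    """Placeholder fit error: distance between a parametric outer-lip curve
    (here, an upper/lower parabola pair) and the segmented lip pixels."""
    w, h_up, h_low = params
    x, y = lip_points[:, 0], lip_points[:, 1]
    upper = h_up * (1 - (x / w) ** 2)      # upper lip boundary
    lower = -h_low * (1 - (x / w) ** 2)    # lower lip boundary
    return np.minimum((y - upper) ** 2, (y - lower) ** 2).mean()

def fit_lip_params(lip_points):
    """Adapt the contour model to the observed lip pixels via DE."""
    res = differential_evolution(
        contour_error, bounds=[(5, 100), (1, 50), (1, 50)], args=(lip_points,))
    return res.x  # the parameter set used as the classifier's feature vector

clf = SVC(kernel="rbf")  # trained on fitted parameter sets and emotion labels
```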
Affiliation(s)
- Anisha Halder
- Department of Electronics and Tele-Communication Engineering, Jadavpur University, Kolkata-32, India
- Aruna Chakraborty
- Department of Computer Science and Engineering, St. Thomas' College of Engineering and Technology, Kolkata, India
- Amit Konar
- Department of Electronics and Tele-Communication Engineering, Jadavpur University, Kolkata-32, India
- Sristi Shaw
- Department of Electronics and Tele-Communication Engineering, Jadavpur University, Kolkata-32, India
11
Emotion-Aware Assistive System for Humanistic Care Based on the Orange Computing Concept. Applied Computational Intelligence and Soft Computing 2012. [DOI: 10.1155/2012/183610]
Abstract
Mental care has become crucial with the rapid growth of the economy and technology. However, recent movements, such as green technologies, place more emphasis on environmental issues than on mental care. Therefore, this study presents an emerging technology called orange computing for mental care applications. Orange computing refers to health, happiness, and physiopsychological care computing, which focuses on designing algorithms and systems for enhancing body and mind balance. The representative color of orange computing originates from a harmonic fusion of passion, love, happiness, and warmth. A case study on a human-machine interactive and assistive system for emotion care was conducted to demonstrate the concept of orange computing. The system can detect the emotional states of users by analyzing their facial expressions, emotional speech, and laughter in a ubiquitous environment, and it can provide corresponding feedback to users according to the results. Experimental results show that the system achieves an average audiovisual recognition rate of 81.8%, demonstrating its feasibility. Compared with traditional questionnaire-based approaches, the proposed system offers real-time analysis of emotional status more efficiently.
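One way to combine the three detectors mentioned above (facial expression, emotional speech, laughter) into a single audiovisual decision is weighted late fusion of per-modality class probabilities, sketched below; the weights are assumptions, not the system's actual values:

```python
import numpy as np

def fuse_modalities(face_probs, speech_probs, laugh_probs,
                    weights=(0.4, 0.4, 0.2)):
    """Each *_probs is a probability vector over the same emotion classes;
    returns the fused class index and the fused distribution."""
    stacked = np.stack([face_probs, speech_probs, laugh_probs])
    fused = np.average(stacked, axis=0, weights=weights)
    return int(np.argmax(fused)), fused
```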
12
Espevik R, Johnsen BH, Eid J. Communication and Performance in Co-Located and Distributed Teams: An Issue of Shared Mental Models of Team Members? Military Psychology 2011. [DOI: 10.1080/08995605.2011.616792]
Affiliation(s)
- Roar Espevik
- Department of Psychosocial Science, University of Bergen, Bergen, Norway
- Bjørn Helge Johnsen
- Department of Psychosocial Science, University of Bergen, Bergen, Norway
- Jarle Eid
- Department of Psychosocial Science, University of Bergen, Bergen, Norway
13
Abstract
Persistent developmental stuttering (PDS) is a common disorder of speech with no identifiable cause. Psychiatric disorders appear to be related to PDS and to influence its clinical manifestation. In this case report, we present the clinical evolution of one PDS patient who underwent pharmacological treatment with fluoxetine together with speech therapy. At the end of 12 weeks of treatment, her scores improved from 28 on the Beck Depression Inventory, 32 on the Hamilton Anxiety Scale, 43 and 47 on the anxiety and avoidance components of the Liebowitz Social Anxiety Scale, respectively, and severe speech impairment according to the Iowa Scale, to 12 on the Beck Depression Inventory, 8 on the Hamilton Anxiety Scale, 25 and 21 on the Liebowitz anxiety and avoidance components, respectively, and moderate speech impairment. Diagnosing and treating psychiatric symptoms in addition to speech therapy appears to be the best therapeutic approach.
14
Al Abdulmohsen T, Kruger THC. The contribution of muscular and auditory pathologies to the symptomatology of autism. Med Hypotheses 2011; 77:1038-47. [PMID: 21925796] [DOI: 10.1016/j.mehy.2011.08.044]
Abstract
Most research concerning the pathology of autism focuses on the search for central abnormalities that account for the production of symptoms. We, however, instead of viewing muscular and auditory features as merely associated manifestations, propose that they are somatic contributors through which some of the main clinical features of autism might be explained. Evidence suggests that muscles affect emotional experience. We think certain muscular dysfunction can impair communication and social interaction and create stereotypic behavior, giving rise to the diagnostic features of autism. Furthermore, because speech is synchronized with facial movements and voice is controlled mainly through auditory feedback, a distortion of auditory feedback could disrupt the voice, which in turn might cause parallel abnormal functioning of the facial muscles.
Affiliation(s)
- Taleb Al Abdulmohsen
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hanover Medical School, Carl-Neuberg Str 1, 30625 Hannover, Germany
15
Jia J, Zhang S, Meng F, Wang Y, Cai L. Emotional Audio-Visual Speech Synthesis Based on PAD. IEEE Transactions on Audio, Speech, and Language Processing 2011. [DOI: 10.1109/tasl.2010.2052246]
16
Al Abdulmohsen T. Aberration in hearing one's own voice can cause not only stuttering but also depression. Med Hypotheses 2010; 74:784-8. [DOI: 10.1016/j.mehy.2009.10.023]
17
Chakraborty A, Konar A, Chakraborty U, Chatterjee A. Emotion Recognition From Facial Expressions and Its Control Using Fuzzy Logic. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 2009. [DOI: 10.1109/tsmca.2009.2014645]
18
Busso C, Bulut M, Lee CC, Kazemzadeh A, Mower E, Kim S, Chang JN, Lee S, Narayanan SS. IEMOCAP: interactive emotional dyadic motion capture database. Lang Resour Eval 2008. [DOI: 10.1007/s10579-008-9076-6]