1
Yan Z, Song Y, Zhou R, Wang L, Wang Z, Dai Z. Facial Expression Realization of Humanoid Robot Head and Strain-Based Anthropomorphic Evaluation of Robot Facial Expressions. Biomimetics (Basel) 2024; 9:122. [PMID: 38534807] [DOI: 10.3390/biomimetics9030122]
Abstract
The facial expressions of humanoid robots play a crucial role in human-computer information interaction. However, quantitative methods for evaluating the anthropomorphism of robot facial expressions are lacking. In this study, we designed and manufactured a humanoid robot head capable of realizing the six basic facial expressions. The driving force of the mechanism is transmitted to the silicone skin through a rigid linkage drive and a snap-button connection, which improves both the driving efficiency and the lifespan of the silicone skin. We used human facial expressions as the basis for simulating and acquiring the movement parameters, and we then designed a control system for the humanoid robot head to achieve these expressions. Moreover, we used a flexible vertical graphene sensor to measure strain on both the human face and the silicone skin of the robot head, and we proposed a method for evaluating the anthropomorphic degree of the robot's facial expressions using the strain difference rate. The feasibility of this method was confirmed through facial expression recognition experiments, and the evaluation results indicated a high degree of anthropomorphism for the six basic facial expressions achieved by the robot head. The study also investigates factors affecting the reproduction of expressions. Finally, the impulse was calculated from the strain curves to characterize the energy consumed by the humanoid robot head in completing different facial expressions, offering a reference for designing humanoid robot heads based on energy consumption ratios. In summary, this paper provides data references for optimizing the mechanisms and selecting the drive components of a humanoid robot head by considering the anthropomorphic degree and energy consumption of each part, and it proposes a new method for evaluating robot facial expressions.
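The strain-based evaluation described above can be illustrated with a short numerical sketch. The snippet below is a minimal, hypothetical reading of the idea: it compares a human strain curve with the corresponding robot-skin strain curve via a normalized difference rate and integrates the strain over time as a rough impulse-style proxy for effort. The formulas, signal names, and sampling rate are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def strain_difference_rate(human_strain, robot_strain):
    # Hypothetical definition: mean absolute deviation of the robot strain
    # from the human strain, normalized by the mean absolute human strain.
    human = np.asarray(human_strain, dtype=float)
    robot = np.asarray(robot_strain, dtype=float)
    return np.mean(np.abs(robot - human)) / np.mean(np.abs(human))

def strain_impulse(strain, dt):
    # Time integral of the absolute strain curve, used here only as a
    # rough proxy for the effort spent completing one expression.
    return np.trapz(np.abs(np.asarray(strain, dtype=float)), dx=dt)

# Synthetic curves sampled at an assumed 100 Hz over one second.
t = np.linspace(0.0, 1.0, 100)
human = 0.050 * np.sin(np.pi * t)          # human cheek strain during a smile
robot = 0.045 * np.sin(np.pi * t + 0.1)    # robot silicone-skin strain
print("difference rate:", strain_difference_rate(human, robot))
print("impulse proxy:", strain_impulse(robot, dt=t[1] - t[0]))
```

Under this reading, a lower difference rate would indicate that the robot skin deforms more like the human face for that expression.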
Affiliation(s)
- Zhibin Yan: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Yi Song: School of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Rui Zhou: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Liuwei Wang: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Zhiliang Wang: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhendong Dai: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China; Institute of Bio-Inspired Structure and Surface Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2
Ishihara H. Objective evaluation of mechanical expressiveness in android and human faces. Adv Robot 2022. [DOI: 10.1080/01691864.2022.2103389]
3
Auflem M, Kohtala S, Jung M, Steinert M. Facing the FACS-Using AI to Evaluate and Control Facial Action Units in Humanoid Robot Face Development. Front Robot AI 2022; 9:887645. [PMID: 35774595] [PMCID: PMC9237251] [DOI: 10.3389/frobt.2022.887645]
Abstract
This paper presents a new approach for evaluating and controlling expressive humanoid robotic faces using open-source computer vision and machine learning methods. Existing research in Human-Robot Interaction lacks flexible and simple tools that are scalable for evaluating and controlling various robotic faces; thus, our goal is to demonstrate the use of readily available AI-based solutions to support the process. We use a newly developed humanoid robot prototype intended for medical training applications as a case example. The approach automatically captures the robot's facial action units, the components traditionally used to describe facial muscle movements in humans, through a webcam while the robot performs random motion. Instead of manipulating the actuators individually or training the robot to express specific emotions, we propose using action units as a means for controlling the robotic face, which enables a multitude of ways to generate dynamic motion, expressions, and behavior. The range of action units achieved by the robot is thus analyzed to discover its expressive capabilities and limitations and to develop a control model by correlating action units to actuation parameters. Because the approach is not dependent on specific facial attributes or actuation capabilities, it can be used for different designs and can continuously inform the development process. In healthcare training applications, our goal is to establish a prerequisite of expressive capabilities of humanoid robots bounded by industrial and medical design constraints. Furthermore, to mediate human interpretation and thus enable decision-making based on observed cognitive, emotional, and expressive cues, our approach aims to find the minimum viable expressive capabilities of the robot without having to optimize for realism. The results from our case example demonstrate the flexibility and efficiency of the presented AI-based solutions in supporting the development of humanoid facial robots.
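As a rough illustration of the control-model idea (correlating detected action units with actuation parameters), the sketch below fits a linear map from actuator commands to AU intensities on logged random-motion data, then inverts it with a pseudoinverse to command a target AU vector. The array shapes, the linear-model assumption, and the synthetic data are illustrative only; this is not the authors' implementation or the specific AU detector used in the paper.

```python
import numpy as np

# Hypothetical logged data from random motion: each row pairs one set of
# actuator commands with the AU intensities measured by an assumed
# webcam-based AU detector.
rng = np.random.default_rng(0)
actuation = rng.uniform(0.0, 1.0, size=(200, 6))    # 6 actuators (assumed)
true_map = rng.uniform(-1.0, 1.0, size=(6, 10))     # unknown coupling, 10 AUs
action_units = actuation @ true_map + 0.02 * rng.normal(size=(200, 10))

# Fit a linear control model AU ~ actuation @ W by least squares.
W, *_ = np.linalg.lstsq(actuation, action_units, rcond=None)

# Invert the model with the pseudoinverse to command a target AU vector,
# clipping to the actuators' admissible range.
target_aus = np.zeros(10)
target_aus[3] = 0.8   # raise one AU (index is illustrative only)
commands = np.clip(target_aus @ np.linalg.pinv(W), 0.0, 1.0)
print(commands)
```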
Affiliation(s)
- Marius Auflem: TrollLABS, Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Sampsa Kohtala: TrollLABS, Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Malte Jung: Robots in Groups Lab, Department of Information Science, Cornell University, Ithaca, NY, United States
- Martin Steinert: TrollLABS, Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
4
Sato W, Namba S, Yang D, Nishida S, Ishi C, Minato T. An Android for Emotional Interaction: Spatiotemporal Validation of Its Facial Expressions. Front Psychol 2022; 12:800657. [PMID: 35185697] [PMCID: PMC8855677] [DOI: 10.3389/fpsyg.2021.800657]
Abstract
Android robots capable of emotional interactions with humans have considerable potential for application to research. While several studies have developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids' facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than the chance level. In Study 3, Nikola produced dynamic facial expressions for the six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola's emotional facial expressions and suggest that it may be useful for future psychological studies and real-life applications.
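Checking whether labeling accuracy exceeds chance, as in Study 2, is typically done with a one-sided binomial test against the chance rate of the forced-choice task. The snippet below is a small sketch of that check; the rater counts and the six-alternative chance level of 1/6 are illustrative assumptions, not the study's data.

```python
from scipy.stats import binomtest

# Hypothetical counts: raters who chose the intended emotion label for one
# expression photograph, tested against the 1/6 chance level of a
# six-alternative forced choice.
correct, raters = 38, 60
result = binomtest(correct, raters, p=1 / 6, alternative="greater")
print(f"accuracy = {correct / raters:.2f}, one-sided p = {result.pvalue:.2e}")
```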
Affiliation(s)
- Wataru Sato: Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan; Field Science Education and Research Center, Kyoto University, Kyoto, Japan
- Shushi Namba: Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Dongsheng Yang: Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Shin’ya Nishida: Graduate School of Informatics, Kyoto University, Kyoto, Japan; NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Carlos Ishi: Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Takashi Minato: Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
5
Namba S, Sato W, Osumi M, Shimokawa K. Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases. Sensors (Basel) 2021; 21:4222. [PMID: 34203007] [PMCID: PMC8235167] [DOI: 10.3390/s21124222]
Abstract
In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of the systems that now have access to dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, and the AFAR toolbox) that detect the facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR provided higher area under the receiver operating characteristic curve (AUC) values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
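The per-AU comparison described above reduces to computing an ROC AUC for each detector's continuous output against manually coded AU presence. Below is a minimal sketch of that computation using scikit-learn; the synthetic labels and scores stand in for real FACS-coded frames and detector outputs, and a real analysis would loop over AUs and over the FaceReader, OpenFace, and AFAR outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical frame-level data: binary ground truth for one AU (manual
# FACS coding) and the continuous score emitted by an automated detector.
rng = np.random.default_rng(1)
au_present = rng.integers(0, 2, size=500)                  # manual coding
detector_score = au_present + rng.normal(0.0, 0.8, 500)    # noisy detector
print("AUC:", roc_auc_score(au_present, detector_score))
```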
Affiliation(s)
- Shushi Namba: Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Wataru Sato: Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Masaki Osumi: KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan
- Koh Shimokawa: KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan