1
Zhao Y, Huang Z, Seligman M, Peng K. Risk and prosocial behavioural cues elicit human-like response patterns from AI chatbots. Sci Rep 2024; 14:7095. PMID: 38528008. DOI: 10.1038/s41598-024-55949-y.
Abstract
Emotions, long deemed a distinctly human characteristic, guide a repertoire of behaviors, e.g., promoting risk aversion under negative emotional states or generosity under positive ones. The question of whether Artificial Intelligence (AI) can possess emotions remains elusive, chiefly due to the absence of an operationalized consensus on what constitutes 'emotion' within AI. Adopting a pragmatic approach, this study investigated the response patterns of AI chatbots, specifically large language models (LLMs), to various emotional primes. We engaged AI chatbots as one would human participants, presenting scenarios designed to elicit positive, negative, or neutral emotional states. Multiple accounts of OpenAI's ChatGPT Plus were then tasked with responding to inquiries concerning investment decisions and prosocial behaviors. Our analysis revealed that ChatGPT-4 bots, when primed with positive, negative, or neutral emotions, exhibited distinct response patterns in both risk-taking and prosocial decisions, a phenomenon less evident in the ChatGPT-3.5 iterations. This observation suggests an enhanced capacity for modulating responses based on emotional cues in more advanced LLMs. While these findings do not establish the presence of emotions in AI, they underscore the feasibility of swaying AI responses by leveraging emotional indicators.
Affiliation(s)
- Yukun Zhao: Positive Psychology Research Center, School of Social Sciences, Tsinghua University, Beijing, China
- Zhen Huang: Positive Psychology Research Center, School of Social Sciences, Tsinghua University, Beijing, China
- Martin Seligman: Department of Psychology, University of Pennsylvania, Philadelphia, USA
- Kaiping Peng: Department of Psychology, Tsinghua University, 5th Floor, Weiqing Building, Beijing, 100084, China
2
Pérez-Zuñiga G, Arce D, Gibaja S, Alvites M, Cano C, Bustamante M, Horna I, Paredes R, Cuellar F. Qhali: A Humanoid Robot for Assisting in Mental Health Treatment. Sensors (Basel) 2024; 24:1321. PMID: 38400478. PMCID: PMC10891936. DOI: 10.3390/s24041321.
Abstract
In recent years, social assistive robots have gained significant acceptance in healthcare settings, particularly for tasks such as patient care and monitoring. This paper offers a comprehensive overview of the expressive humanoid robot, Qhali, with a focus on its industrial design, essential components, and validation in a controlled environment. The industrial design phase encompasses research, ideation, design, manufacturing, and implementation. Subsequently, the mechatronic system is detailed, covering sensing, actuation, control, energy, and software interface. Qhali's capabilities include autonomous execution of routines for mental health promotion and psychological testing. The software platform enables therapist-directed interventions, allowing the robot to convey emotional gestures through joint and head movements and simulate various facial expressions for more engaging interactions. Finally, with the robot fully operational, an initial behavioral experiment was conducted to validate Qhali's capability to deliver telepsychological interventions. The findings from this preliminary study indicate that participants reported enhancements in their emotional well-being, along with positive outcomes in their perception of the psychological intervention conducted with the humanoid robot.
Affiliation(s)
- Gustavo Pérez-Zuñiga: Engineering Department, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Diego Arce: Engineering Department, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Sareli Gibaja: Department of Psychology, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Marcelo Alvites: Engineering Department, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Consuelo Cano: Department of Art and Design, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Marlene Bustamante: Engineering Department, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Ingrid Horna: Engineering Department, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Renato Paredes: Department of Psychology, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
- Francisco Cuellar: Engineering Department, Pontificia Universidad Católica del Perú, San Miguel, Lima 15088, Peru
3
Lee JP, Jang H, Jang Y, Song H, Lee S, Lee PS, Kim J. Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface. Nat Commun 2024; 15:530. PMID: 38225246. PMCID: PMC10789773. DOI: 10.1038/s41467-023-44673-2.
Abstract
Human affects such as emotions, moods, and feelings are increasingly considered key parameters for enhancing the interaction of humans with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information. Here, we develop a multi-modal human emotion recognition system that can efficiently utilize comprehensive emotional information by combining verbal and non-verbal expression data. The system is built around a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent; it features a first-of-its-kind bidirectional triboelectric strain and vibration sensor that enables verbal and non-verbal expression data to be sensed and combined for the first time. The interface is fully integrated with a data processing circuit for wireless data transfer, allowing real-time emotion recognition. With the help of machine learning, various human emotion recognition tasks are performed accurately in real time, even while the wearer is masked, and a digital concierge application is demonstrated in a VR environment.
Affiliation(s)
- Jin Pyo Lee: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea; School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
- Hanhyeok Jang: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
- Yeonwoo Jang: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
- Hyeonseo Song: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
- Suwoo Lee: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
- Pooi See Lee: School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
- Jiyun Kim: School of Material Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea; Center for Multidimensional Programmable Matter, Ulsan National Institute of Science and Technology, Ulsan 44919, South Korea
4
Dosso JA, Riminchan A, Robillard JM. Social robotics for children: an investigation of manufacturers' claims. Front Robot AI 2023; 10:1080157. PMID: 38187475. PMCID: PMC10770258. DOI: 10.3389/frobt.2023.1080157.
Abstract
As the market for commercial children's social robots grows, manufacturers' claims around the functionality and outcomes of their products have the potential to impact consumer purchasing decisions. In this work, we qualitatively and quantitatively assess the content and scientific support for claims about social robots for children made on manufacturers' websites. A sample of 21 robot websites was obtained using location-independent keyword searches on Google, Yahoo, and Bing from April to July 2021. All claims made on manufacturers' websites about robot functionality and outcomes (n = 653 statements) were subjected to content analysis, and the quality of evidence for these claims was evaluated using a validated quality evaluation tool. Social robot manufacturers made clear claims about the impact of their products in the areas of interaction, education, emotion, and adaptivity. Claims tended to focus on the child rather than the parent or other users. Robots were primarily described in the context of interactive, educational, and emotional uses, rather than being for health, safety, or security. The quality of the information used to support these claims was highly variable and at times potentially misleading. Many websites used language implying that robots had interior thoughts and experiences; for example, that they would love the child. This study provides insight into the content and quality of parent-facing manufacturer claims regarding commercial social robots for children.
Affiliation(s)
- Jill A. Dosso: Neuroscience, Engagement, and Smart Tech (NEST) Laboratory, Department of Medicine, Division of Neurology, The University of British Columbia, Vancouver, BC, Canada; NEST Laboratory, British Columbia Children’s and Women’s Hospital, Vancouver, BC, Canada
- Anna Riminchan: Neuroscience, Engagement, and Smart Tech (NEST) Laboratory, Department of Medicine, Division of Neurology, The University of British Columbia, Vancouver, BC, Canada; NEST Laboratory, British Columbia Children’s and Women’s Hospital, Vancouver, BC, Canada
- Julie M. Robillard: Neuroscience, Engagement, and Smart Tech (NEST) Laboratory, Department of Medicine, Division of Neurology, The University of British Columbia, Vancouver, BC, Canada; NEST Laboratory, British Columbia Children’s and Women’s Hospital, Vancouver, BC, Canada
5
Li YT, Yeh SL, Huang TR. The cross-race effect in automatic facial expression recognition violates measurement invariance. Front Psychol 2023; 14:1201145. PMID: 38130968. PMCID: PMC10733503. DOI: 10.3389/fpsyg.2023.1201145.
Abstract
Emotion has been a subject of intensive research in psychology and cognitive neuroscience over several decades. Recently, more and more studies of emotion have adopted automatic rather than manual methods of facial emotion recognition to analyze images or videos of human faces. Compared to manual methods, these computer-vision-based, automatic methods can help analyze large amounts of data objectively and rapidly. These automatic methods have also been validated and are believed to be accurate in their judgments. However, they often rely on statistical learning models (e.g., deep neural networks), which are intrinsically inductive and thus suffer from the problems of induction. Specifically, models trained primarily on Western faces may not generalize well enough to accurately judge Eastern faces, which can jeopardize the measurement invariance of emotions in cross-cultural studies. To demonstrate such a possibility, the present study carries out a cross-racial validation of two popular facial emotion recognition systems, FaceReader and DeepFace, using two Western and two Eastern face datasets. Although both systems achieved high overall accuracy in judging emotion category on the Western datasets, they performed relatively poorly on the Eastern datasets, especially in the recognition of negative emotions. While these results caution against using these automatic methods of emotion recognition on non-Western faces, they also suggest that the happiness measurements outputted by these methods are accurate and invariant across races and hence can still be utilized for cross-cultural studies of positive psychology.
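The cross-dataset validation this abstract describes amounts to comparing per-emotion accuracy across face datasets and checking whether the per-category scores stay comparable. A minimal sketch of that comparison follows; the labels and numbers are hypothetical illustrations, not the study's data or the actual FaceReader/DeepFace outputs:

```python
from collections import defaultdict

def per_emotion_accuracy(true_labels, predicted_labels):
    """Per-category recall: fraction of correct predictions per true emotion."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(true_labels, predicted_labels):
        total[t] += 1
        correct[t] += (t == p)
    return {emotion: correct[emotion] / total[emotion] for emotion in total}

# Hypothetical ground truth and model output for one dataset:
true = ["happy", "happy", "sad", "sad", "angry", "angry"]
pred = ["happy", "happy", "sad", "happy", "angry", "sad"]
acc = per_emotion_accuracy(true, pred)
# e.g. high accuracy for "happy" but lower for negative emotions; measurement
# invariance would require these per-category scores to remain comparable
# between the Western and Eastern datasets.
```

Running the same comparison on each dataset and contrasting the resulting dictionaries is the per-category analogue of the accuracy gap the authors report.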
Affiliation(s)
- Yen-Ting Li: Department of Psychology, National Taiwan University, Taipei City, Taiwan
- Su-Ling Yeh: Department of Psychology; Graduate Institute of Brain and Mind Sciences; Neurobiology and Cognitive Science Center; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei City, Taiwan
- Tsung-Ren Huang: Department of Psychology; Graduate Institute of Brain and Mind Sciences; Neurobiology and Cognitive Science Center; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei City, Taiwan
6
Wang J, Chen Y, Huo S, Mai L, Jia F. Research Hotspots and Trends of Social Robot Interaction Design: A Bibliometric Analysis. Sensors (Basel) 2023; 23:9369. PMID: 38067743. PMCID: PMC10708843. DOI: 10.3390/s23239369.
Abstract
(1) Background: Social robot interaction design is crucial for determining user acceptance and experience. However, few studies have systematically discussed the current focus and future research directions of social robot interaction design from a bibliometric perspective. Therefore, we conducted this study in order to identify the latest research progress and evolution trajectory of research hotspots in social robot interaction design over the last decade. (2) Methods: We conducted a comprehensive review based on 2416 papers related to social robot interaction design obtained from the Web of Science (WOS) database. Our review utilized bibliometric techniques and integrated VOSviewer and CiteSpace to construct a knowledge map. (3) Conclusions: The current research hotspots of social robot interaction design mainly focus on #1 the study of human-robot relationships in social robots, #2 research on the emotional design of social robots, #3 research on social robots for children's psychotherapy, #4 research on companion robots for elderly rehabilitation, and #5 research on educational social robots. The reference co-citation analysis identifies the classic literature that forms the basis of the current research, which provides theoretical guidance and methods for the current research. Finally, we discuss several future research directions and challenges in this field.
Affiliation(s)
- Jianmin Wang: College of Arts and Media, Tongji University, Shanghai 201804, China; Shenzhen Research Institute, Sun Yat-Sen University, Shenzhen 518057, China
- Yongkang Chen: College of Design and Innovation, Tongji University, Shanghai 200092, China
- Siguang Huo: College of Design and Innovation, Tongji University, Shanghai 200092, China
- Liya Mai: College of Design and Innovation, Tongji University, Shanghai 200092, China
- Fusheng Jia: College of Design and Innovation, Tongji University, Shanghai 200092, China
7
Krpan D, Booth JE, Damien A. The positive-negative-competence (PNC) model of psychological responses to representations of robots. Nat Hum Behav 2023; 7:1933-1954. PMID: 37783891. PMCID: PMC10663151. DOI: 10.1038/s41562-023-01705-7.
Abstract
Robots are becoming an increasingly prominent part of society. Despite their growing importance, there exists no overarching model that synthesizes people's psychological reactions to robots and identifies what factors shape them. To address this, we created a taxonomy of affective, cognitive and behavioural processes in response to a comprehensive stimulus sample depicting robots from 28 domains of human activity (for example, education, hospitality and industry) and examined its individual difference predictors. Across seven studies that tested 9,274 UK and US participants recruited via online panels, we used a data-driven approach combining qualitative and quantitative techniques to develop the positive-negative-competence model, which categorizes all psychological processes in response to the stimulus sample into three dimensions: positive, negative and competence-related. We also established the main individual difference predictors of these dimensions and examined the mechanisms for each predictor. Overall, this research provides an in-depth understanding of psychological functioning regarding representations of robots.
Affiliation(s)
- Dario Krpan: Department of Psychological and Behavioural Science, London School of Economics and Political Science, London, UK
- Jonathan E Booth: Department of Management, London School of Economics and Political Science, London, UK
- Andreea Damien: Department of Psychological and Behavioural Science, London School of Economics and Political Science, London, UK
8
Su H, Qi W, Chen J, Yang C, Sandoval J, Laribi MA. Recent advancements in multimodal human-robot interaction. Front Neurorobot 2023; 17:1084000. PMID: 37250671. PMCID: PMC10210148. DOI: 10.3389/fnbot.2023.1084000.
Abstract
Robotics has advanced significantly over the years, and human-robot interaction (HRI) now plays an important role in delivering the best user experience, cutting down on laborious tasks, and raising public acceptance of robots. New HRI approaches are necessary to promote the evolution of robots, with a more natural and flexible manner of interaction clearly the most crucial. As a newly emerging approach to HRI, multimodal HRI is a method for individuals to communicate with a robot using various modalities, including voice, image, text, eye movement, and touch, as well as bio-signals like EEG and ECG. It is a broad field closely related to cognitive science, ergonomics, multimedia technology, and virtual reality, with numerous applications springing up each year. However, little research has been done to summarize the current development and future trends of HRI. To this end, this paper systematically reviews the state of the art of multimodal HRI and its applications by summing up the latest research articles relevant to this field. Moreover, research developments in terms of input signals and output signals are also covered.
Affiliation(s)
- Hang Su: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Wen Qi: School of Future Technology, South China University of Technology, Guangzhou, China
- Jiahao Chen: State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Chenguang Yang: Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom
- Juan Sandoval: Department of GMSC, Pprime Institute, CNRS, ENSMA, University of Poitiers, Poitiers, France
- Med Amine Laribi: Department of GMSC, Pprime Institute, CNRS, ENSMA, University of Poitiers, Poitiers, France
9
de Lope J, Graña M. An ongoing review of speech emotion recognition. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.002.
10
Sun YC, Effati M, Naguib HE, Nejat G. SoftSAR: The New Softer Side of Socially Assistive Robots-Soft Robotics with Social Human-Robot Interaction Skills. Sensors (Basel) 2022; 23:432. PMID: 36617030. PMCID: PMC9824785. DOI: 10.3390/s23010432.
Abstract
When we think of "soft" in terms of socially assistive robots (SARs), it is mainly in reference to the soft outer shells of these robots, ranging from robotic teddy bears to furry robot pets. However, soft robotics is a promising field that has not yet been leveraged in SAR design. Soft robotics is the incorporation of smart materials to achieve biomimetic motions, active deformations, and responsive sensing. By utilizing these distinctive characteristics, a new type of SAR can be developed that has the potential to be safer to interact with and more flexible, and that can uniquely use novel interaction modes (colors/shapes) to engage in heightened human-robot interaction. In this perspective article, we coin this new collaborative research area SoftSAR. We provide extensive discussions on just how soft robotics can be utilized to positively impact SARs, from their actuation mechanisms to their sensory designs, and how valuable it will be in informing future SAR design and applications. With extensive discussions on the fundamental mechanisms of soft robotic technologies, we outline a number of key SAR research areas that can benefit from unique soft robotic mechanisms, resulting in the creation of the new field of SoftSAR.
Affiliation(s)
- Yu-Chen Sun: Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Smart Materials and Structures (TSMART), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada
- Meysam Effati: Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada
- Hani E. Naguib: Toronto Smart Materials and Structures (TSMART), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Institute of Advanced Manufacturing (TIAM), University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Rehabilitation Institute, Toronto, ON M5G 2A2, Canada
- Goldie Nejat: Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Institute of Advanced Manufacturing (TIAM), University of Toronto, Toronto, ON M5S 3G8, Canada; Toronto Rehabilitation Institute, Toronto, ON M5G 2A2, Canada; Rotman Research Institute, Baycrest Health Sciences, North York, ON M6A 2E1, Canada
11
Troup LJ, Zhang W. Editorial: Methods and applications in emotion science. Front Psychol 2022; 13:1058322. DOI: 10.3389/fpsyg.2022.1058322.
12
Mohd Lokman A, Nik Ismail NNN, Redzuan F, Abd Aziz A, Tsuchiya T. Spiritual Therapeutic Robot for Elderly With Early Alzheimer’s Disease: A Design Guide Based on Gender. Malaysian Journal of Medicine and Health Sciences 2022:71-79. DOI: 10.47836/mjmhs.18.s9.11.
Abstract
Introduction: Researchers and technologists have been exploring ways to utilize robotic technology to aid elderly care and increase emotional wellbeing. Previous studies indicate that spirituality is a core factor in successful aging. Various research has been done on therapeutic robots for the elderly with Alzheimer’s Disease (AD); however, little focus has been given to the emotions and spiritual elements perceived by different genders. Therefore, this research aims to explore spiritual therapeutic robot design elements based on the elderly's emotional experience by gender. Methods: The research first conducted expert interviews involving 9 experts on elderly care, robotics, and spiritual practice; second, the KJ Method involving 4 experts in language, spirituality, elderly care, and robotics; and third, a qualitative and quantitative Kansei assessment (n=12) among the elderly with early AD to determine the conceptual design guide and the spiritual emotion words, and to finalize the design guide. Results: A two-sample t-test shows that five of ten spiritual design elements have a p-value of 0.05, indicating there is a 50-50 chance of a significant difference in spiritual emotional experience between male and female respondents. Further analysis shows differences in results between the genders, but similar scores for zikr, surah, and prayer. Conclusion: The results enabled the research to produce a gender-based design guide for therapeutic robots based on spiritual elements and emotions, intended to evoke positive emotions among the elderly with early AD. The gender-focused design will further extend effectiveness, as it will fit the specific demands of each gender, thus elevating their emotional wellbeing.
13
Group Emotion Detection Based on Social Robot Perception. Sensors (Basel) 2022; 22:3749. PMID: 35632160. PMCID: PMC9145339. DOI: 10.3390/s22103749.
Abstract
Social robotics is an emerging area that is becoming present in social spaces through the introduction of autonomous social robots. Social robots offer services, perform tasks, and interact with people in such social environments, demanding more efficient and complex Human–Robot Interaction (HRI) designs. A strategy to improve HRI is to provide robots with the capacity to detect the emotions of the people around them, in order to plan a trajectory, modify their behaviour, and generate an appropriate interaction with people based on the analysed information. However, in social environments in which it is common to find groups of people, new approaches are needed to enable robots to recognise groups of people and the emotion of those groups, which can also be associated with the scene in which the group is participating. Some existing studies focus on detecting group cohesion and recognising group emotions; nevertheless, these works do not perform the recognition tasks from a robocentric perspective that considers the sensory capacity of robots. In this context, a system to recognise scenes in terms of groups of people, and then detect the global (prevailing) emotion in a scene, is presented. The approach proposed to visualise and recognise emotions in typical HRI is based on the face size of the people recognised by the robot during its navigation (face sizes decrease as the robot moves away from a group of people). On each frame of the video stream from the visual sensor, individual emotions are recognised with the Visual Geometry Group (VGG) neural network pre-trained to recognise faces (VGGFace); the emotion of the frame is then obtained by aggregating the individual emotions with a fusion method, and the global (prevalent) emotion of the scene (group of people) is in turn obtained by aggregating the emotions of its constituent frames.
Additionally, this work proposes a strategy to create datasets with images/videos in order to validate the estimation of emotions in scenes and personal emotions. Both datasets are generated in a simulated environment based on the Robot Operating System (ROS) from videos captured by robots through their sensory capabilities. Tests are performed in two simulated environments in ROS/Gazebo: a museum and a cafeteria. Results show that the accuracy in the detection of individual emotions is 99.79% and the detection of group emotion (scene emotion) in each frame is 90.84% and 89.78% in the cafeteria and the museum scenarios, respectively.
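The two-level aggregation described above (per-face emotions fused into a frame emotion, frame emotions fused into a scene emotion) can be sketched with a simple majority-vote fusion. The function names and the choice of majority voting are illustrative assumptions, not the paper's exact fusion method:

```python
from collections import Counter

def fuse(emotions):
    """Majority-vote fusion: return the most frequent emotion label.

    A stand-in for the paper's fusion method (assumption: the actual
    system may weight contributions, e.g. by detected face size).
    """
    return Counter(emotions).most_common(1)[0][0]

def frame_emotion(per_face_emotions):
    # Aggregate the emotions of all faces detected in one frame.
    return fuse(per_face_emotions)

def scene_emotion(frames):
    # Aggregate frame-level emotions over the video stream of a scene.
    return fuse(frame_emotion(f) for f in frames)

# Hypothetical per-frame face emotions from a robot's camera stream:
frames = [
    ["happy", "happy", "neutral"],
    ["happy", "sad"],
    ["neutral", "happy", "happy"],
]
print(scene_emotion(frames))  # prevailing emotion of the scene
```

The same two-stage structure applies regardless of the fusion rule chosen, which is what makes the frame-level and scene-level accuracies separately measurable, as in the reported results.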
14
Hsieh TY, Cross ES. People's dispositional cooperative tendencies towards robots are unaffected by robots' negative emotional displays in prisoner's dilemma games. Cogn Emot 2022; 36:995-1019. PMID: 35389323. DOI: 10.1080/02699931.2022.2054781.
Abstract
The study explores the impact of robots' emotional displays on people's tendency to cooperate with a robot opponent in prisoner's dilemma games. Participants played iterated prisoner's dilemma games with a non-expressive robot (as a measure of cooperative baseline), followed by an angry and then a sad robot. Based on the Emotion as Social Information model, we expected participants with higher cooperative predispositions to cooperate less when a robot displayed anger and to cooperate more when the robot displayed sadness. Conversely, according to this model, participants with lower cooperative predispositions should cooperate more with an angry robot and less with a sad robot. The results from 60 participants failed to support these predictions. Only the participants' cooperative predispositions significantly predicted their cooperative tendencies during gameplay. Participants who cooperated more in the baseline measure also cooperated more with the robots displaying sadness and anger. In exploratory analyses, we found that participants who accurately recognised the robots' sad and angry displays tended to cooperate less with them overall. The study highlights the impact of personal factors in human-robot cooperation, and how these factors might surpass the influence of bottom-up emotional displays by robots in the present experimental scenario.
Affiliation(s)
- Te-Yi Hsieh: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Emily S Cross: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland; Department of Cognitive Science, Macquarie University, Sydney, Australia