1
Karami V, Yaffe MJ, Gore G, Moon AJ, Abbasgholizadeh Rahimi S. Socially Assistive Robots for patients with Alzheimer's Disease: A scoping review. Arch Gerontol Geriatr 2024; 123:105409. [PMID: 38565072] [DOI: 10.1016/j.archger.2024.105409]
Abstract
BACKGROUND The most common form of dementia, Alzheimer's Disease (AD), is challenging both for those affected and for their care providers and caregivers. Socially assistive robots (SARs) offer promising supportive care to assist in the complex management associated with AD. OBJECTIVES To conduct a scoping review of published articles that proposed, discussed, developed, or tested SARs for interacting with AD patients. METHODS We performed a scoping review informed by the methodological framework of Arksey and O'Malley and adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist for reporting the results. At the identification stage, an information specialist performed a comprehensive search of eight bibliographic databases from inception until January 2022. The inclusion criteria covered all populations who receive or provide care for AD and all interventions using SARs for AD; the outcomes of interest were any outcome related to AD patients, care providers, or caregivers. All study types published in English were included. RESULTS After deduplication, 1251 articles were screened. Title and abstract screening resulted in 252 articles, and full-text review retained 125 included articles: 72 focusing on daily life support, 46 on cognitive therapy, and 7 on cognitive assessment. CONCLUSION We conducted a comprehensive scoping review emphasizing the interaction of SARs with AD patients, with a specific focus on daily life support, cognitive assessment, and cognitive therapy. We discussed the pertinence of our findings to specific populations, interventions, and outcomes of human-SAR interaction and identified current knowledge gaps in SARs for AD patients.
Affiliation(s)
- Vania Karami
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila - Quebec AI Institute, Montreal, Canada
- Mark J Yaffe
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada; St. Mary's Hospital Center, Montreal, Canada
- Genevieve Gore
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Montreal, Canada
- AJung Moon
- Department of Electrical & Computer Engineering, Faculty of Engineering, McGill University, Montreal, Canada
- Samira Abbasgholizadeh Rahimi
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila - Quebec AI Institute, Montreal, Canada; Faculty of Dental Medicine and Oral Health Sciences.
2
Fischer-Janzen A, Wendt TM, Van Laerhoven K. A scoping review of gaze and eye tracking-based control methods for assistive robotic arms. Front Robot AI 2024; 11:1326670. [PMID: 38440775] [PMCID: PMC10909843] [DOI: 10.3389/frobt.2024.1326670]
Abstract
Background: Assistive robotic arms (ARAs) are designed to assist physically disabled people with daily activities. Existing joysticks and head controls are not applicable for severely disabled people, such as people with Locked-in Syndrome. Therefore, eye tracking control is part of ongoing research. The related literature spans many disciplines, creating a heterogeneous field that makes it difficult to gain an overview. Objectives: This work focuses on ARAs that are controlled by gaze and eye movements. By answering the research questions, this paper provides details on the design of the systems, a comparison of input modalities, methods for measuring the performance of these controls, and an outlook on research areas that have gained interest in recent years. Methods: This review was conducted as outlined in the PRISMA 2020 Statement. After identifying a wide range of approaches in use, the authors decided to use the PRISMA-ScR extension for a scoping review to present the results. The identification process was carried out by screening three databases, followed by a snowball search. Results: 39 articles and 6 reviews were included in this article. Characteristics related to the system and study design were extracted and presented, divided into three groups based on the use of eye tracking. Conclusion: This paper aims to provide an overview for researchers new to the field by offering insight into eye tracking-based robot controllers. We have identified open questions that need to be answered in order to provide people with severe motor function loss with systems that are highly usable and accessible.
Affiliation(s)
- Anke Fischer-Janzen
- Faculty Economy, Work-Life Robotics Institute, University of Applied Sciences Offenburg, Offenburg, Germany
- Thomas M. Wendt
- Faculty Economy, Work-Life Robotics Institute, University of Applied Sciences Offenburg, Offenburg, Germany
- Kristof Van Laerhoven
- Ubiquitous Computing, Department of Electrical Engineering and Computer Science, University of Siegen, Siegen, Germany
3
Fiorini L, D'Onofrio G, Sorrentino A, Cornacchia Loizzo FG, Russo S, Ciccone F, Giuliani F, Sancarlo D, Cavallo F. The Role of Coherent Robot Behavior and Embodiment in Emotion Perception and Recognition During Human-Robot Interaction: Experimental Study. JMIR Hum Factors 2024; 11:e45494. [PMID: 38277201] [PMCID: PMC10858416] [DOI: 10.2196/45494]
Abstract
BACKGROUND Social robots are becoming increasingly important as companions in our daily lives. Consequently, humans expect to interact with them using the same mental models applied to human-human interactions, including the use of cospeech gestures. Research efforts have been devoted to understanding users' needs and developing robot behavioral models that can perceive the user's state and properly plan a reaction. Despite these efforts, some challenges regarding the effect of robot embodiment and behavior on the perception of emotions remain open. OBJECTIVE The aim of this study is twofold. First, it aims to assess the role of the robot's cospeech gestures and embodiment in the user's perceived emotions in terms of valence (stimulus pleasantness), arousal (intensity of evoked emotion), and dominance (degree of control exerted by the stimulus). Second, it aims to evaluate the robot's accuracy in identifying positive, negative, and neutral emotions displayed by interacting humans using 3 supervised machine learning algorithms: support vector machine, random forest, and K-nearest neighbor. METHODS The Pepper robot was used to elicit the 3 emotions in humans using a set of 60 images retrieved from a standardized database. In particular, 2 experimental conditions for emotion elicitation were performed with the Pepper robot: with a static behavior or with a robot that expresses coherent (COH) cospeech behavior. Furthermore, to evaluate the role of robot embodiment, a third elicitation was performed by asking the participant to interact with a PC, where a graphical interface showed the same images. Each participant was requested to undergo only 1 of the 3 experimental conditions. RESULTS A total of 60 participants were recruited for this study, 20 for each experimental condition, for a total of 3600 interactions.
The results showed significant differences (P<.05) in valence, arousal, and dominance when stimulated with the Pepper robot behaving COH with respect to the PC condition, thus underlining the importance of the robot's nonverbal communication and embodiment. A higher valence score was obtained for the robot elicitations (COH and robot with static behavior) than for the PC. For emotion recognition, the K-nearest neighbor classifiers achieved the best accuracy results. In particular, the COH modality achieved the highest accuracy (0.97) when compared with the static behavior and PC elicitations (0.88 and 0.94, respectively). CONCLUSIONS The results suggest that the use of multimodal communication channels, such as cospeech and visual channels, as in the COH modality, may improve the recognition accuracy of the user's emotional state and can reinforce the perceived emotion. Future studies should investigate the effect of age, culture, and cognitive profile on emotion perception and recognition, going beyond the limitations of this work.
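The K-nearest-neighbor step reported above can be illustrated with a minimal from-scratch sketch. The 2-D feature vectors and class labels below are synthetic stand-ins for illustration only, not the study's interaction data, and `k=3` is an arbitrary choice.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors (e.g. valence/arousal scores) per class.
train = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1), (0.5, 0.5), (0.4, 0.6)]
labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"]

print(knn_predict(train, labels, (0.85, 0.85)))  # → positive
```

The study's actual features come from recorded interactions; here the point is only the majority-vote mechanics of KNN.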
Affiliation(s)
- Laura Fiorini
- Department of Industrial Engineering, University of Florence, Firenze, Italy
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera (Pisa), Italy
- Grazia D'Onofrio
- Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Sergio Russo
- Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Filomena Ciccone
- Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Francesco Giuliani
- Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Daniele Sancarlo
- Complex Unit of Geriatrics, Department of Medical Sciences, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo (Foggia), Italy
- Filippo Cavallo
- Department of Industrial Engineering, University of Florence, Firenze, Italy
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera (Pisa), Italy
4
Szorkovszky A, Veenstra F, Glette K. Central pattern generators evolved for real-time adaptation to rhythmic stimuli. BIOINSPIRATION & BIOMIMETICS 2023; 18:046020. [PMID: 37339660] [DOI: 10.1088/1748-3190/ace017]
Abstract
For a robot to be both autonomous and collaborative requires the ability to adapt its movement to a variety of external stimuli, whether these come from humans or other robots. Typically, legged robots have oscillation periods explicitly defined as a control parameter, limiting the adaptability of walking gaits. Here we demonstrate a virtual quadruped robot employing a bio-inspired central pattern generator (CPG) that can spontaneously synchronize its movement to a range of rhythmic stimuli. Multi-objective evolutionary algorithms were used to optimize the variation of movement speed and direction as functions of the brain stem drive and the centre-of-mass control, respectively. This was followed by optimization of an additional layer of neurons that filters fluctuating inputs. As a result, a range of CPGs were able to adjust their gait pattern and/or frequency to match the input period. We show how this can be used to facilitate coordinated movement despite differences in morphology, as well as to learn new movement patterns.
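The spontaneous synchronization described above can be sketched with a single phase oscillator driven by a rhythmic input, a drastic simplification of the evolved CPG: with sufficient coupling the oscillator phase-locks to the input and its effective frequency follows the stimulus rather than its own natural frequency. The frequencies, coupling gain, and integration settings below are illustrative assumptions, not the paper's parameters.

```python
import math

def entrained_frequency(natural_hz, input_hz, coupling=1.0, dt=0.001, t=60.0):
    """Integrate dphi/dt = 2*pi*natural_hz + K*sin(phi_in - phi) and return
    the oscillator's average frequency over the second half of the run
    (transient discarded)."""
    steps = int(t / dt)
    phi = phi_in = phi_mid = 0.0
    for i in range(steps):
        phi += dt * (2 * math.pi * natural_hz
                     + coupling * math.sin(phi_in - phi))
        phi_in += dt * 2 * math.pi * input_hz
        if i == steps // 2:
            phi_mid = phi
    return (phi - phi_mid) / (2 * math.pi * (t / 2))

print(round(entrained_frequency(1.0, 1.1), 2))                # locks to input
print(round(entrained_frequency(1.0, 1.1, coupling=0.0), 2))  # stays at natural
```

With the coupling on, the phase difference settles at a fixed point (sin θ* = Δω/K), so the oscillator runs at the input frequency; with it off, it keeps its natural frequency.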
Affiliation(s)
- Alex Szorkovszky
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Frank Veenstra
- Department of Informatics, University of Oslo, Oslo, Norway
- Kyrre Glette
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
5
Fu D, Abawi F, Carneiro H, Kerzel M, Chen Z, Strahl E, Liu X, Wermter S. A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution. Int J Soc Robot 2023; 15:1-16. [PMID: 37359433] [PMCID: PMC10067521] [DOI: 10.1007/s12369-023-00993-3]
Abstract
To enhance human-robot social interaction, it is essential for robots to process multiple social cues in a complex real-world environment. However, incongruency of input information across modalities is inevitable and could be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. For the human study, a behavioural experiment was conducted with 37 participants. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound. Gaze direction and sound locations were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than under the incongruent condition. For the robot study, our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively. After mounting the trained model on the iCub, the robot was exposed to laboratory conditions similar to the human experiment. While human performance was overall superior, our trained model demonstrated that it could replicate attention responses similar to those of humans.
Affiliation(s)
- Di Fu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Fares Abawi
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Hugo Carneiro
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Matthias Kerzel
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Ziwei Chen
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Erik Strahl
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Xun Liu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Stefan Wermter
- Department of Informatics, University of Hamburg, Hamburg, Germany
6
Maroto-Gómez M, Alonso-Martín F, Malfaz M, Castro-González Á, Castillo JC, Salichs MÁ. A Systematic Literature Review of Decision-Making and Control Systems for Autonomous and Social Robots. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00977-3]
Abstract
In recent years, considerable research has been carried out to develop robots that can improve our quality of life during tedious and challenging tasks. In these contexts, robots operating without human supervision open many possibilities to assist people in their daily activities. When autonomous robots collaborate with humans, social skills are necessary for adequate communication and cooperation. Considering these facts, endowing autonomous and social robots with decision-making and control models is critical for appropriately fulfilling their initial goals. This manuscript presents a systematic review of the evolution of decision-making systems and control architectures for autonomous and social robots over the last three decades. These architectures have been incorporating new methods based on biologically inspired models and Machine Learning to enhance what these systems can offer to developed societies. The review explores the most novel advances in each application area, comparing their most essential features. Additionally, we describe the current challenges of software architecture devoted to action selection, an analysis not provided in similar reviews of behavioural models for autonomous and social robots. Finally, we present the directions that these systems may take in the future.
7
Human-behaviour-based social locomotion model improves the humanization of social robots. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00542-z]
8
A Novel Approach to Systematic Development of Social Robot Product Families. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00906-w]
9
Sorrentino A, Fiorini L, Mancioppi G, Cavallo F, Umbrico A, Cesta A, Orlandini A. Personalizing Care Through Robotic Assistance and Clinical Supervision. Front Robot AI 2022; 9:883814. [PMID: 35903720] [PMCID: PMC9315221] [DOI: 10.3389/frobt.2022.883814]
Abstract
By 2030, the World Health Organization (WHO) foresees a worldwide workforce shortfall of healthcare professionals, with dramatic consequences for patients, economies, and communities. Research in assistive robotics has received increasing attention during the last decade, demonstrating its utility in the realization of intelligent robotic solutions for healthcare and social assistance, also to compensate for such workforce shortages. Nevertheless, a challenge for effective assistive robots is dealing with a high variety of situations and contextualizing their interactions according to the living contexts and habits (or preferences) of assisted people. This study presents a novel cognitive system for assistive robots that relies on artificial intelligence (AI) representation and reasoning features/services to support the decision-making processes of healthcare assistants. We proposed an original integration of AI-based features, namely knowledge representation and reasoning and automated planning, to 1) define a human-in-the-loop continuous assistance procedure that helps clinicians evaluate and manage patients and 2) dynamically adapt robot behaviors to the specific needs and interaction abilities of patients. The system is deployed in a realistic assistive scenario to demonstrate its feasibility to support a clinician taking care of several patients with different conditions and needs.
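As a loose illustration of adapting robot behavior to a patient's needs and interaction abilities, a rule table can map a profile to an interaction plan. This is a drastic simplification of the paper's knowledge-representation and automated-planning approach; the profile keys and plan fields below are hypothetical, not the paper's ontology.

```python
def select_robot_behavior(patient):
    """Map a (hypothetical) patient profile to an interaction plan,
    mirroring the idea of contextualizing robot behavior per patient."""
    plan = {"modality": "speech", "pace": "normal", "reminders": []}
    if patient.get("hearing_impaired"):
        plan["modality"] = "tablet"      # switch channel if speech won't work
    if patient.get("cognitive_decline"):
        plan["pace"] = "slow"            # slow down the interaction
        plan["reminders"].append("medication")
    if patient.get("mobility_limited"):
        plan["reminders"].append("physiotherapy")
    return plan

print(select_robot_behavior({"hearing_impaired": True,
                             "cognitive_decline": True}))
```

A real system would derive such plans from a reasoner over a patient knowledge base rather than hard-coded rules.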
Affiliation(s)
- Laura Fiorini
- Department of Industrial Engineering, University of Florence, Florence, Italy
- Filippo Cavallo
- Scuola Superiore Sant'Anna, Pisa, Italy
- Department of Industrial Engineering, University of Florence, Florence, Italy
- Alessandro Umbrico
- CNR–Institute of Cognitive Sciences and Technologies (CNR-ISTC), Rome, Italy
- Correspondence: Alessandro Umbrico
- Amedeo Cesta
- CNR–Institute of Cognitive Sciences and Technologies (CNR-ISTC), Rome, Italy
- Andrea Orlandini
- CNR–Institute of Cognitive Sciences and Technologies (CNR-ISTC), Rome, Italy
10
Schneider J, Abraham R, Meske C, Vom Brocke J. Artificial Intelligence Governance For Businesses. INFORMATION SYSTEMS MANAGEMENT 2022. [DOI: 10.1080/10580530.2022.2085825]
Affiliation(s)
- Johannes Schneider
- Institute of Information Systems, University of Liechtenstein, Vaduz, Liechtenstein
- Rene Abraham
- Institute of Information Systems, University of Liechtenstein, Vaduz, Liechtenstein
- Jan Vom Brocke
- Institute of Information Systems, University of Liechtenstein, Vaduz, Liechtenstein
11
Telepresence Social Robotics towards Co-Presence: A Review. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12115557]
Abstract
Telepresence robots are becoming popular in social interactions involving health care, elderly assistance, guidance, or office meetings. There are two types of human psychological experiences to consider in robot-mediated interactions: (1) telepresence, in which a user develops a sense of being present near the remote interlocutor, and (2) co-presence, in which a user perceives the other person as being present locally with him or her. This work presents a literature review on developments supporting robotic social interactions, contributing to improving the sense of presence and co-presence via robot mediation. This survey aims to define social presence and co-presence, identify autonomous "user-adaptive systems" for social robots, and propose a taxonomy for "co-presence" mechanisms. It presents an overview of social robotics systems, application areas, and technical methods, and provides directions for telepresence and co-presence robot design given current and future challenges. Finally, we suggest evaluation guidelines for these systems, using face-to-face interaction as the reference.
12
Corallo F, Maresca G, Formica C, Bonanno L, Bramanti A, Parasporo N, Giambò FM, De Cola MC, Lo Buono V. Humanoid Robot Use in Cognitive Rehabilitation of Patients with Severe Brain Injury: A Pilot Study. J Clin Med 2022; 11:jcm11102940. [PMID: 35629068] [PMCID: PMC9146630] [DOI: 10.3390/jcm11102940]
Abstract
Severe acquired brain injury (SABI) is a major global public health problem and a source of disability. A major contributor to disability after SABI is limited access to multidisciplinary rehabilitation, despite evidence of sustained functional gains, improved quality of life, increased return to work, and reduced need for long-term care. Twelve patients with a diagnosis of SABI were enrolled and equally divided into two groups: experimental and control. Patients in both groups underwent intensive neurorehabilitation according to the severity of their disabilities (motor, psycho-cognitive, and sensory deficits). However, in the experimental group, the treatment was performed by using a humanoid robot. At baseline, the two groups differed significantly only in Severe Impairment Battery (SIB) scores. Results showed that the experimental treatment had a higher effect than the traditional one on quality of life and mood. In conclusion, this pilot study provides evidence of the possible effects of relational and cognitive stimulation in more severely brain-injured patients.
Affiliation(s)
- Francesco Corallo
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
- Giuseppa Maresca
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
- Caterina Formica
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
- Lilla Bonanno
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
- Alessia Bramanti
- Department of Medicine, Surgery and Dentistry, Medical School of Salerno, University of Salerno, 84084 Fisciano, Italy
- Nicholas Parasporo
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
- Fabio Mauro Giambò
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
- Maria Cristina De Cola
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
- Viviana Lo Buono
- IRCCS Centro Neurolesi Bonino-Pulejo, 98124 Messina, Italy
13
Quiroz M, Patiño R, Diaz-Amado J, Cardinale Y. Group Emotion Detection Based on Social Robot Perception. SENSORS (BASEL, SWITZERLAND) 2022; 22:3749. [PMID: 35632160] [PMCID: PMC9145339] [DOI: 10.3390/s22103749]
Abstract
Social robotics is an emerging area that is becoming present in social spaces through the introduction of autonomous social robots. Social robots offer services, perform tasks, and interact with people in such social environments, demanding more efficient and complex Human-Robot Interaction (HRI) designs. A strategy to improve HRI is to provide robots with the capacity to detect the emotions of the people around them in order to plan a trajectory, modify their behaviour, and generate an appropriate interaction based on the analysed information. However, in social environments in which groups of people are common, new approaches are needed to make robots able to recognise groups of people and the emotion of the groups, which can also be associated with a scene in which the group is participating. Some existing studies focus on detecting group cohesion and recognising group emotions; nevertheless, these works do not perform the recognition tasks from a robocentric perspective, considering the sensory capacity of robots. In this context, a system to recognise scenes in terms of groups of people, and then detect the global (prevailing) emotion in a scene, is presented. The approach proposed to visualise and recognise emotions in typical HRI is based on the face size of people recognised by the robot during its navigation (face sizes decrease when the robot moves away from a group of people). On each frame of the video stream of the visual sensor, individual emotions are recognised based on the Visual Geometry Group (VGG) neural network pre-trained to recognise faces (VGGFace); then, to detect the emotion of the frame, individual emotions are aggregated with a fusion method, and consequently, to detect the global (prevalent) emotion in the scene (group of people), the emotions of its constituent frames are also aggregated.
Additionally, this work proposes a strategy to create datasets with images/videos in order to validate the estimation of emotions in scenes and personal emotions. Both datasets are generated in a simulated environment based on the Robot Operating System (ROS) from videos captured by robots through their sensory capabilities. Tests are performed in two simulated environments in ROS/Gazebo: a museum and a cafeteria. Results show that the accuracy in the detection of individual emotions is 99.79% and the detection of group emotion (scene emotion) in each frame is 90.84% and 89.78% in the cafeteria and the museum scenarios, respectively.
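The two-stage aggregation described above, from faces to frames and from frames to a scene, can be sketched with face area as the fusion weight (larger faces are closer to the robot). The labels and areas below are made-up examples; the actual system derives per-face emotions from a VGGFace-based recogniser.

```python
from collections import defaultdict

def frame_emotion(faces):
    """Fuse per-face emotion labels into one frame-level label.
    `faces` is a list of (label, face_area) pairs; larger faces weigh more."""
    scores = defaultdict(float)
    for label, area in faces:
        scores[label] += area
    return max(scores, key=scores.get)

def scene_emotion(frames):
    """Aggregate frame-level labels into the prevailing scene emotion."""
    counts = defaultdict(int)
    for faces in frames:
        counts[frame_emotion(faces)] += 1
    return max(counts, key=counts.get)

frames = [
    [("happy", 900), ("neutral", 400)],   # frame 1 -> happy
    [("happy", 700), ("sad", 300)],       # frame 2 -> happy
    [("neutral", 1200), ("happy", 500)],  # frame 3 -> neutral
]
print(scene_emotion(frames))  # → happy
```

Any fusion rule (e.g. confidence-weighted voting) could replace the area sum; area is used here only to mirror the face-size heuristic.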
Affiliation(s)
- Marco Quiroz
- Electrical and Electronics Engineering Department, School of Electronics and Telecommunications Engineering, Universidad Católica San Pablo, Arequipa 04001, Peru
- Raquel Patiño
- Electrical and Electronics Engineering Department, School of Electronics and Telecommunications Engineering, Universidad Católica San Pablo, Arequipa 04001, Peru
- José Diaz-Amado
- Electrical and Electronics Engineering Department, School of Electronics and Telecommunications Engineering, Universidad Católica San Pablo, Arequipa 04001, Peru
- Instituto Federal da Bahia, Vitoria da Conquista 45078-300, Brazil
- Yudith Cardinale
- Electrical and Electronics Engineering Department, School of Electronics and Telecommunications Engineering, Universidad Católica San Pablo, Arequipa 04001, Peru
- Higher School of Engineering, Science and Technology, Universidad Internacional de Valencia, 46002 Valencia, Spain
14
Continuous Emotion Recognition for Long-Term Behavior Modeling through Recurrent Neural Networks. TECHNOLOGIES 2022. [DOI: 10.3390/technologies10030059]
Abstract
One’s internal state is mainly communicated through nonverbal cues, such as facial expressions, gestures and tone of voice, which in turn shape the corresponding emotional state. Hence, emotions can be effectively used, in the long term, to form an opinion of an individual’s overall personality. The latter can be capitalized on in many human–robot interaction (HRI) scenarios, such as in the case of an assisted-living robotic platform, where a human’s mood may entail the adaptation of a robot’s actions. To that end, we introduce a novel approach that gradually maps and learns the personality of a human, by conceiving and tracking the individual’s emotional variations throughout their interaction. The proposed system extracts the facial landmarks of the subject, which are used to train a suitably designed deep recurrent neural network architecture. The above architecture is responsible for estimating the two continuous coefficients of emotion, i.e., arousal and valence, following the broadly known Russell’s model. Finally, a user-friendly dashboard is created, presenting both the momentary and the long-term fluctuations of a subject’s emotional state. Therefore, we propose a handy tool for HRI scenarios, where robot’s activity adaptation is needed for enhanced interaction performance and safety.
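The recurrence over facial-landmark frames can be sketched with an untrained Elman-style network in plain Python: each frame of (x, y) landmark coordinates updates a hidden state, and the final state is mapped to continuous (arousal, valence) in [-1, 1], following Russell's circumplex convention. The layer sizes and the random, untrained weights are illustrative assumptions; the paper's deep architecture is learned from data.

```python
import math
import random

random.seed(0)

# Hypothetical sizes: 68 facial landmarks -> 136 (x, y) inputs per frame.
N_IN, N_HID = 136, 16

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

Wx, Wh = rand_matrix(N_HID, N_IN), rand_matrix(N_HID, N_HID)
Wo = rand_matrix(2, N_HID)  # two outputs: arousal and valence

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def predict_emotion(landmark_seq):
    """Elman recurrence h_t = tanh(Wx x_t + Wh h_{t-1}); the final hidden
    state is mapped to (arousal, valence), each squashed into [-1, 1]."""
    h = [0.0] * N_HID
    for x in landmark_seq:
        h = [math.tanh(a + b) for a, b in zip(matvec(Wx, x), matvec(Wh, h))]
    return [math.tanh(o) for o in matvec(Wo, h)]

seq = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(10)]
arousal, valence = predict_emotion(seq)
print(all(-1.0 <= v <= 1.0 for v in (arousal, valence)))  # → True
```

With training (omitted here), the same forward pass would regress the momentary arousal/valence coefficients that the dashboard then tracks over time.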
15
Schneider J. Optimizing human hand gestures for AI-systems. AI COMMUN 2022. [DOI: 10.3233/aic-210081]
Abstract
Humans interact more and more with systems containing AI components. In this work, we focus on hand gestures, such as handwriting and sketches, serving as inputs to such systems. They are represented as a trajectory, i.e., a sequence of points, that is altered to improve interaction with an AI model while keeping the model fixed. Optimized inputs are accompanied by instructions on how to create them. We aim to reduce human effort and recognition errors while limiting changes to the original inputs. We derive multiple objectives and measures, and propose continuous and discrete optimization methods that embrace the AI model to improve samples in an iterative fashion by removing, shifting and reordering points of the gesture trajectory. Our quantitative and qualitative evaluation shows that mimicking generated proposals that differ only modestly from the original ones leads to lower error rates and requires less effort. Furthermore, our work can be easily adjusted for sketch abstraction, improving on prior work.
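Of the discrete edits mentioned (removing, shifting, and reordering points), point removal is the simplest to illustrate. The sketch below greedily drops the interior point that deviates least from the segment between its neighbours; the deviation measure and the example stroke are illustrative assumptions, not the paper's actual objectives.

```python
import numpy as np

def point_deviation(p, a, b):
    """Perpendicular distance of p from the line through a and b."""
    ab, ap = b - a, p - a
    denom = float(np.hypot(ab[0], ab[1]))
    if denom == 0:
        return float(np.hypot(ap[0], ap[1]))
    return abs(ab[0] * ap[1] - ab[1] * ap[0]) / denom

def simplify(traj, keep):
    """Greedily drop the interior point with the smallest deviation until `keep` remain."""
    pts = [np.asarray(p, dtype=float) for p in traj]
    while len(pts) > keep:
        devs = [point_deviation(pts[i], pts[i - 1], pts[i + 1])
                for i in range(1, len(pts) - 1)]
        del pts[1 + int(np.argmin(devs))]        # endpoints are never removed
    return pts

# A stroke with one real corner at (2, 1.5): near-collinear points go first.
traj = [(0, 0), (1, 0.01), (2, 1.5), (3, 0.02), (4, 0)]
print([tuple(p) for p in simplify(traj, keep=3)])  # the corner survives
```

The paper's methods additionally score candidate edits against the fixed AI model's recognition error and the human effort to reproduce the gesture; this sketch covers only the geometric removal step.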
16
D’Onofrio G, Fiorini L, Sorrentino A, Russo S, Ciccone F, Giuliani F, Sancarlo D, Cavallo F. Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project). SENSORS 2022; 22:s22082861. [PMID: 35458845 PMCID: PMC9031388 DOI: 10.3390/s22082861] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 04/03/2022] [Accepted: 04/07/2022] [Indexed: 01/10/2023]
Abstract
Background: Emotion recognition skills are predicted to be fundamental features in social robots. Since facial detection and recognition algorithms are compute-intensive operations, methods must be identified that can parallelize the algorithmic operations for large-scale information exchange in real time. The study aims were to determine whether traditional machine learning algorithms could be used to assess each user's emotions separately, to compare emotion recognition across two robotic modalities (static versus moving robot), and to evaluate the acceptability and usability of an assistive robot from an end-user point of view. Methods: Twenty-seven hospital employees (M = 12; F = 15) were recruited to perform the experiment, viewing 60 positive, negative, or neutral images selected from the International Affective Picture System (IAPS) database. The experiment was performed with the Pepper robot. In the experimental phase with Pepper in active mode, concordant mimicry was programmed based on the type of image (positive, negative, or neutral). During the experimentation, the images were shown on a tablet on the robot's chest and a web interface, lasting 7 s for each slide. For each image, the participants were asked to perform a subjective assessment of the perceived emotional experience using the Self-Assessment Manikin (SAM). After the participants used the robotic solution, the Almere model questionnaire (AMQ) and the system usability scale (SUS) were administered to assess the acceptability, usability, and functionality of the robotic solution. Analysis was performed on video recordings. The evaluation of three types of attitude (positive, negative, and neutral) was performed with two machine learning classification algorithms: k-nearest neighbors (KNN) and random forest (RF).
Results: According to the analysis of emotions performed on the recorded videos, the RF algorithm performed better in terms of accuracy (mean ± sd = 0.98 ± 0.01) and execution time (mean ± sd = 5.73 ± 0.86 s) than the KNN algorithm. With the RF algorithm, the neutral, positive, and negative attitudes all had equally high precision (mean = 0.98) and F-measure (mean = 0.98). Most of the participants confirmed a high level of usability and acceptability of the robotic solution. Conclusions: The RF algorithm performed better in terms of accuracy and execution time than the KNN algorithm. The robot was not a disturbing factor in the arousal of emotions.
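Both classifiers compared in the study are standard. A self-contained k-nearest-neighbors baseline over synthetic three-class "attitude" features might look like the sketch below; the data and feature dimensions are invented for illustration and are unrelated to the study's video features.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_predict(X_train, y_train, X_test, k=5):
    """Classify each test row by majority vote among its k nearest training rows."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Synthetic stand-ins for per-video features of the three attitudes:
# 0 = neutral, 1 = positive, 2 = negative (illustrative only).
centres = np.array([[0.0, 0.0], [3.0, 3.0], [-3.0, 3.0]])
X = np.vstack([rng.normal(c, 0.5, (40, 2)) for c in centres])
y = np.repeat([0, 1, 2], 40)

idx = rng.permutation(len(y))
train, test = idx[:90], idx[90:]
acc = (knn_predict(X[train], y[train], X[test]) == y[test]).mean()
print(f"KNN accuracy: {acc:.2f}")
```

On well-separated clusters like these the vote is near-unanimous; the study's reported advantage of random forests over KNN concerns its real video data, not this toy example.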
Affiliation(s)
- Grazia D’Onofrio
- Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy;
- Correspondence: ; Tel./Fax: +39-0882-410271
| | - Laura Fiorini
- Department of Industrial Engineering, University of Florence, 50121 Florence, Italy; (L.F.); (A.S.); (F.C.)
| | - Alessandra Sorrentino
- Department of Industrial Engineering, University of Florence, 50121 Florence, Italy; (L.F.); (A.S.); (F.C.)
| | - Sergio Russo
- Information and Communication Technology, Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy; (S.R.); (F.G.)
| | - Filomena Ciccone
- Clinical Psychology Service, Health Department, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy;
| | - Francesco Giuliani
- Information and Communication Technology, Innovation & Research Unit, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy; (S.R.); (F.G.)
| | - Daniele Sancarlo
- Complex Unit of Geriatrics, Department of Medical Sciences, Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, 71013 Foggia, Italy;
| | - Filippo Cavallo
- Department of Industrial Engineering, University of Florence, 50121 Florence, Italy; (L.F.); (A.S.); (F.C.)
17
Robinson F, Nejat G. An analysis of design recommendations for socially assistive robot helpers for effective human-robot interactions in senior care. J Rehabil Assist Technol Eng 2022; 9:20556683221101389. [PMID: 35733614 PMCID: PMC9208044 DOI: 10.1177/20556683221101389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2022] [Accepted: 04/26/2022] [Indexed: 11/15/2022] Open
Abstract
As the global population ages, there is an increase in demand for assistive technologies that can alleviate the stresses on healthcare systems. The growing field of socially assistive robotics (SARs) offers unique solutions that are interactive, engaging, and adaptable to different users’ needs. Crucial to having positive human-robot interaction (HRI) experiences in senior care settings is the overall design of the robot, considering the unique challenges and opportunities that come with novice users. This paper presents a novel study that explores the effect of SAR design on HRI in senior care through a results-oriented analysis of the literature. We provide key design recommendations to ensure inclusion for a diverse set of users. Open challenges of considering user preferences during design, creating adaptive behaviors, and developing intelligent autonomy are discussed in detail. SAR features of appearance and interaction mode along with SAR frameworks for perception and intelligence are explored to evaluate individual developments using metrics such as trust, acceptance, and intent to use. Drawing from a diverse set of features, SAR frameworks, and HRI studies, the discussion highlights robot characteristics of greatest influence in promoting wellbeing and aging-in-place of older adults and generates design recommendations that are important for future development.
Affiliation(s)
- Fraser Robinson
- Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
| | - Goldie Nejat
- Autonomous Systems and Biomechatronics Laboratory (ASBLab), Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, Toronto, ON, Canada
- Baycrest Health Sciences, Rotman Research Institute, Toronto, ON, Canada
18
Gyrard A, Tabeau K, Fiorini L, Kung A, Senges E, De Mul M, Giuliani F, Lefebvre D, Hoshino H, Fabbricotti I, Sancarlo D, D’Onofrio G, Cavallo F, Guiot D, Arzoz-Fernandez E, Okabe Y, Tsukamoto M. Knowledge Engineering Framework for IoT Robotics Applied to Smart Healthcare and Emotional Well-Being. Int J Soc Robot 2021; 15:445-472. [PMID: 34804257 PMCID: PMC8594653 DOI: 10.1007/s12369-021-00821-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/22/2021] [Indexed: 12/01/2022]
Abstract
Social companion robots are receiving increasing attention as a way to help elderly people stay independent at home and to decrease their social isolation. When developing solutions, one remaining challenge is to design the right applications so that they are usable by elderly people. For this purpose, co-creation methodologies involving multiple stakeholders and a multidisciplinary research team (e.g., elderly people, medical professionals, and computer scientists such as roboticists or IoT engineers) were designed within the ACCRA (Agile Co-Creation of Robots for Ageing) project. This paper addresses the following research question: How can Internet of Robotic Things (IoRT) technology and co-creation methodologies help to design emotion-based robotic applications? This is supported by the ACCRA project, which develops advanced social robots to support active and healthy ageing, co-created by various stakeholders such as ageing people and physicians. We demonstrate this with three robots, Buddy, ASTRO, and RoboHon, used for daily life, mobility, and conversation. The three robots understand and convey emotions in real time using Internet of Things and Artificial Intelligence technologies (e.g., knowledge-based reasoning).
Affiliation(s)
| | - Kasia Tabeau
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
| | - Laura Fiorini
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
| | | | - Eloise Senges
- Trialog, Paris, France
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- BlueFrog Robotics, Paris, France
- Université Paris-Dauphine, Paris, France
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
- Kyoto University, Kyoto, Japan
- Kobe University, Kobe, Japan
| | - Marleen De Mul
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
| | - Francesco Giuliani
- Trialog, Paris, France
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- BlueFrog Robotics, Paris, France
- Université Paris-Dauphine, Paris, France
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
- Kyoto University, Kyoto, Japan
- Kobe University, Kobe, Japan
| | | | - Hiroshi Hoshino
- Trialog, Paris, France
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- BlueFrog Robotics, Paris, France
- Université Paris-Dauphine, Paris, France
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
- Kyoto University, Kyoto, Japan
- Kobe University, Kobe, Japan
| | - Isabelle Fabbricotti
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
| | - Daniele Sancarlo
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
| | - Grazia D’Onofrio
- Fondazione IRCCS Casa Sollievo della Sofferenza, San Giovanni Rotondo, Italy
| | - Filippo Cavallo
- Department of Industrial Engineering, University of Florence, Florence, Italy
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
19
Abstract
The 2021 sales volume in the service robot market is striking: expert reports from the International Federation of Robotics confirm a total market share of 27 billion USD. Moreover, startups designated as service robot companies now constitute 29% of all robotic companies recorded in the United States. These figures, among other similar data, underscore the need for formal development of the service robot area, including knowledge transfer and literature reviews. Furthermore, the spread of COVID-19 accelerated investment of time and effort in service robotics by business units and research groups. This research work therefore intends to contribute to the formalization of service robots as an area of robotics by presenting a systematic review of the scientific literature. First, a definition of service robots grounded in fundamental ontology is provided, followed by a detailed review covering technological applications; state-of-the-art commercial technology; and application cases indexed in the consulted databases.
20
Hani Daniel Zakaria M, Lengagne S, Corrales Ramón JA, Mezouar Y. General Framework for the Optimization of the Human-Robot Collaboration Decision-Making Process Through the Ability to Change Performance Metrics. Front Robot AI 2021; 8:736644. [PMID: 34760932 PMCID: PMC8573032 DOI: 10.3389/frobt.2021.736644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Accepted: 09/28/2021] [Indexed: 11/16/2022] Open
Abstract
This paper proposes a new decision-making framework in the context of Human-Robot Collaboration (HRC). State-of-the-art techniques treat HRC as an optimization problem in which the utility function, also called the reward function, is defined to accomplish the task regardless of how well the interaction is performed. When performance metrics are considered, they cannot be easily changed within the same framework. In contrast, our decision-making framework can easily handle a change of performance metrics from one scenario to another. Our method treats HRC as a constrained optimization problem in which the utility function is split into two main parts. First, a constraint defines how to accomplish the task. Second, a reward evaluates the performance of the collaboration; this is the only part that is modified when changing the performance metrics. It gives control over the way the interaction unfolds, and it also guarantees the adaptation of the robot's actions to the human's in real time. In this paper, the decision-making process is based on Nash equilibrium and perfect-information extensive form from game theory. It can deal with collaborative interactions under different performance metrics, such as the time to complete the task or the probability of human errors. Simulations and a real experimental study on an "assembly task" (i.e., a game based on a construction kit) illustrate the effectiveness of the proposed framework.
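The constraint/reward split described here can be illustrated with a toy joint-action choice. The actions, feasibility rule, and metrics below are invented for illustration and are not the paper's game-theoretic formulation; the point is only that swapping the metric changes the reward function while the task constraint stays fixed.

```python
from itertools import product

# Toy joint actions for one step of a human-robot assembly game (illustrative only).
human_actions = ["pick_brick", "wait"]
robot_actions = ["hold_base", "fetch_tool", "wait"]

def task_feasible(h, r):
    """Constraint part: the step only completes if the human picks while the robot helps."""
    return h == "pick_brick" and r != "wait"

def reward(h, r, metric="time"):
    """Reward part: a swappable performance metric; only this changes between scenarios."""
    if metric == "time":
        return -1 if "wait" in (h, r) else 0      # penalise idle partners
    if metric == "human_error":
        return 1 if r == "hold_base" else 0       # a steadied base reduces human slips
    raise ValueError(metric)

def best_joint_action(metric):
    """Optimise the reward over the joint actions that satisfy the task constraint."""
    feasible = [(h, r) for h, r in product(human_actions, robot_actions)
                if task_feasible(h, r)]
    return max(feasible, key=lambda hr: reward(*hr, metric=metric))

print(best_joint_action("human_error"))   # ('pick_brick', 'hold_base')
```

The framework in the paper computes equilibria over extensive-form games rather than this one-shot maximization, but the separation of "what completes the task" from "what makes the collaboration good" is the same.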
Affiliation(s)
| | - Sébastien Lengagne
- CNRS, Clermont Auvergne INP, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Juan Antonio Corrales Ramón
- Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Santiago de Compostela, Spain
| | - Youcef Mezouar
- CNRS, Clermont Auvergne INP, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
21
Dazeley R, Vamplew P, Foale C, Young C, Aryal S, Cruz F. Levels of explainable artificial intelligence for human-aligned conversational explanations. ARTIF INTELL 2021. [DOI: 10.1016/j.artint.2021.103525] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
22
Shourmasti ES, Colomo-Palacios R, Holone H, Demi S. User Experience in Social Robots. SENSORS (BASEL, SWITZERLAND) 2021; 21:5052. [PMID: 34372289 PMCID: PMC8348916 DOI: 10.3390/s21155052] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 07/20/2021] [Accepted: 07/23/2021] [Indexed: 11/16/2022]
Abstract
Social robots are increasingly penetrating our daily lives. They are used in various domains, such as healthcare, education, business, industry, and culture. However, introducing this technology for use in conventional environments is not trivial. For users to accept social robots, a positive user experience is vital, and it should be considered a critical part of the robots' development process. This may potentially lead to extensive use of social robots and strengthen their diffusion in society. The goal of this study is to summarize the extant literature focused on user experience in social robots and to identify the challenges and benefits of UX evaluation in social robots. To achieve this goal, the authors carried out a systematic literature review that relies on PRISMA guidelines. Our findings revealed that the most common methods to evaluate UX in social robots are questionnaires and interviews. UX evaluations were found to be beneficial in providing early feedback and, consequently, in handling errors at an early stage. However, despite the importance of UX in social robots, robot developers often neglect to set UX goals due to a lack of knowledge or time. This study emphasizes the need for robot developers to acquire the required theoretical and practical knowledge on how to perform a successful UX evaluation.
Affiliation(s)
| | - Ricardo Colomo-Palacios
- Department of Computer Science, Østfold University College, 1783 Halden, Norway; (E.S.S.); (H.H.); (S.D.)
23
Constructing an Emotion Estimation Model Based on EEG/HRV Indexes Using Feature Extraction and Feature Selection Algorithms. SENSORS 2021; 21:s21092910. [PMID: 33919251 PMCID: PMC8122245 DOI: 10.3390/s21092910] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 04/17/2021] [Accepted: 04/19/2021] [Indexed: 01/14/2023]
Abstract
In human emotion estimation using an electroencephalogram (EEG) and heart rate variability (HRV), there are, as far as we know, two main issues. The first is that measurement devices for physiological signals are expensive and not easy to wear. The second is that unnecessary physiological indexes have not been removed, which is likely to decrease the accuracy of machine learning models. In this study, we used a single-channel EEG sensor and a photoplethysmography (PPG) sensor, which are inexpensive and easy to wear. We collected data from 25 participants (18 males and 7 females) and used a deep learning algorithm to construct an emotion classification model based on Arousal–Valence space, using several feature combinations obtained from physiological indexes selected according to our criteria, including our proposed feature selection methods. We then performed accuracy verification, applying a stratified 10-fold cross-validation method to the constructed models. The results showed that model accuracies are as high as 90% to 99% when the feature selection methods we proposed are applied, which suggests that a small number of physiological indexes, even from inexpensive sensors, can be used to construct an accurate emotion classification model if an appropriate feature selection method is applied. Our results contribute to emotion classification models that are more accurate, less costly, and less time-consuming, with the potential to be applied in various areas.
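Stratified k-fold cross-validation, as used here for accuracy verification, keeps class proportions equal across folds so that every fold is representative. A minimal sketch of the splitting step over synthetic labels (not the study's data) is:

```python
import numpy as np

def stratified_kfold(y, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs whose test folds preserve class proportions."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        idx = rng.permutation(np.where(y == cls)[0])
        for i, j in enumerate(idx):        # deal each class round-robin across folds
            folds[i % k].append(j)
    for i in range(k):
        test = np.array(folds[i])
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Synthetic labels for four arousal-valence quadrants (illustrative only).
y = np.repeat([0, 1, 2, 3], 50)
for train, test in stratified_kfold(y, k=10):
    assert len(test) == 20 and len(np.unique(y[test])) == 4
print("all folds stratified")
```

With 200 samples and 4 balanced classes, each of the 10 folds here holds 5 samples per class, mirroring the full class distribution.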
24
Feasibility Study on the Role of Personality, Emotion, and Engagement in Socially Assistive Robotics: A Cognitive Assessment Scenario. INFORMATICS 2021. [DOI: 10.3390/informatics8020023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
This study aims to investigate the role of several aspects that may influence human–robot interaction in assistive scenarios. In particular, we focused on semi-permanent qualities (i.e., personality and cognitive state) and temporal traits (i.e., emotion and engagement) of the user profile. To this end, we organized an experimental session in which 11 elderly users performed a cognitive assessment with the non-humanoid ASTRO robot. The ASTRO robot administered the Mini-Mental State Examination test in a Wizard of Oz setup. Temporal and long-term qualities of each user profile were assessed through self-report questionnaires and behavioral features extracted from the recorded videos. Results highlighted that the quality of the interaction did not depend on the cognitive state of the participants. On the contrary, the cognitive assessment with the robot significantly reduced the anxiety of the users by enhancing their trust in the robotic entity. This suggests that the personality and affect traits of the interacting user have a fundamental influence on the quality of the interaction, also in the socially assistive context.
25
Davies S, Lucas A, Ricolfe-Viala C, Di Nuovo A. A Database for Learning Numbers by Visual Finger Recognition in Developmental Neuro-Robotics. Front Neurorobot 2021; 15:619504. [PMID: 33737873 PMCID: PMC7960766 DOI: 10.3389/fnbot.2021.619504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Accepted: 02/01/2021] [Indexed: 11/13/2022] Open
Abstract
Numerical cognition is a fundamental component of human intelligence that has not yet been fully understood. Indeed, it is a subject of research in many disciplines, e.g., neuroscience, education, cognitive and developmental psychology, philosophy of mathematics, and linguistics. In Artificial Intelligence, aspects of numerical cognition have been modelled through neural networks to replicate and analytically study children's behaviours. However, artificial models need to incorporate realistic sensory-motor information from the body to fully mimic children's learning behaviours, e.g., the use of fingers to learn and manipulate numbers. To this end, this article presents a database of images, focused on number representation with fingers using both human and robot hands, which can constitute the basis for building new realistic models of numerical cognition in humanoid robots, enabling a grounded learning approach in developmental autonomous agents. The article provides a benchmark analysis of the datasets in the database, which are used to train, validate, and test five state-of-the-art deep neural networks, compared for classification accuracy together with an analysis of the computational requirements of each network. The discussion highlights the trade-off between speed and precision in the detection, which is required for realistic applications in robotics.
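A benchmark that compares models on both classification accuracy and computational cost, as the article does for its five networks, reduces to a small harness like the one below. The toy "models" and data are placeholders, not the networks or the finger-counting images evaluated in the article.

```python
import time

def benchmark(models, X, y):
    """Compare classifiers on accuracy and mean per-sample latency (toy harness)."""
    rows = []
    for name, predict in models.items():
        t0 = time.perf_counter()
        preds = [predict(x) for x in X]
        latency = (time.perf_counter() - t0) / len(X)
        accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
        rows.append((name, accuracy, latency))
    return rows

# Toy stand-ins for finger-count classifiers: inputs are already the digit shown.
X = list(range(6))
y = list(range(6))
models = {
    "fast_net": lambda x: x,                         # always right, cheap
    "slow_net": lambda x: (time.sleep(1e-4), x)[1],  # always right, slower
}
for name, accuracy, latency in benchmark(models, X, y):
    print(f"{name}: accuracy={accuracy:.2f}, latency={latency * 1e3:.3f} ms")
```

Tables of exactly this shape (model, accuracy, cost) are what make the speed-versus-precision trade-off the discussion highlights directly comparable.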
Affiliation(s)
- Sergio Davies
- Department of Computing, Sheffield Hallam University, Sheffield, United Kingdom
| | - Alexandr Lucas
- Department of Computing, Sheffield Hallam University, Sheffield, United Kingdom.,Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
| | - Carlos Ricolfe-Viala
- Instituto de Automàtica e Informàtica Industrial, Universitat Politecnica de Valencia, Valencia, Spain
| | - Alessandro Di Nuovo
- Department of Computing, Sheffield Hallam University, Sheffield, United Kingdom
26
27
Belmonte LM, García AS, Morales R, de la Vara JL, López de la Rosa F, Fernández-Caballero A. Feeling of Safety and Comfort towards a Socially Assistive Unmanned Aerial Vehicle That Monitors People in a Virtual Home. SENSORS 2021; 21:s21030908. [PMID: 33572833 PMCID: PMC7866270 DOI: 10.3390/s21030908] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 01/21/2021] [Accepted: 01/23/2021] [Indexed: 12/18/2022]
Abstract
Unmanned aerial vehicles (UAVs) represent a new model of social robots for home care of dependent persons. In this regard, this article introduces a study on people’s feeling of safety and comfort while watching the monitoring trajectory of a quadrotor dedicated to determining their condition. Three main parameters are evaluated: the relative monitoring altitude, the monitoring velocity and the shape of the monitoring path around the person (ellipsoidal or circular). For this purpose, a new trajectory generator based on a state machine, which is successfully implemented and simulated in MATLAB/Simulink®, is described. The study is carried out with 37 participants using a virtual reality (VR) platform based on two modules, UAV simulator and VR Visualiser, both communicating through the MQTT protocol. The participants’ preferences have been a high relative monitoring altitude, a high monitoring velocity and a circular path. These choices are a starting point for the design of trustworthy socially assistive UAVs flying in real homes.
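The orbit-generation step of such a trajectory generator (circular versus ellipsoidal monitoring paths around the person, at a chosen relative altitude) can be sketched as follows. The radii, altitude, and waypoint count are illustrative values, not the study's MATLAB/Simulink implementation, which additionally sequences such orbits through a state machine.

```python
import math

def monitoring_path(shape, cx=0.0, cy=0.0, altitude=2.0, n=36, rx=1.5, ry=1.0):
    """Waypoints of one orbit around a person at (cx, cy) at a fixed relative altitude."""
    if shape == "circular":
        ry = rx                                  # a circle is the equal-radii case
    return [(cx + rx * math.cos(2 * math.pi * i / n),
             cy + ry * math.sin(2 * math.pi * i / n),
             altitude)
            for i in range(n)]

# A high circular orbit, loosely matching the participants' stated preferences.
path = monitoring_path("circular", altitude=2.5)
print(len(path), path[0])                        # 36 waypoints, starting at (1.5, 0.0, 2.5)
```

The monitoring velocity, the study's third parameter, would be set by the rate at which the quadrotor is commanded through these waypoints.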
Affiliation(s)
- Lidia M. Belmonte
- Departamento de Ingeniería Eléctrica, Electrónica, Automática y Comunicaciones, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (L.M.B.); (R.M.)
- Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (A.S.G.); (J.L.d.l.V.); (F.L.d.l.R.)
| | - Arturo S. García
- Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (A.S.G.); (J.L.d.l.V.); (F.L.d.l.R.)
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
| | - Rafael Morales
- Departamento de Ingeniería Eléctrica, Electrónica, Automática y Comunicaciones, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (L.M.B.); (R.M.)
- Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (A.S.G.); (J.L.d.l.V.); (F.L.d.l.R.)
| | - Jose Luis de la Vara
- Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (A.S.G.); (J.L.d.l.V.); (F.L.d.l.R.)
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
| | - Francisco López de la Rosa
- Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (A.S.G.); (J.L.d.l.V.); (F.L.d.l.R.)
| | - Antonio Fernández-Caballero
- Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain; (A.S.G.); (J.L.d.l.V.); (F.L.d.l.R.)
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
- Biomedical Research Networking Center in Mental Health (CIBERSAM), 28016 Madrid, Spain
- Correspondence: ; Tel.: +34-967599200
28
Nagabhushan P, Sonbhadra SK, Punn NS, Agarwal S. Towards Machine Learning to Machine Wisdom: A Potential Quest. BIG DATA ANALYTICS 2021. [DOI: 10.1007/978-3-030-93620-4_19] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
29
The AMIRO Social Robotics Framework: Deployment and Evaluation on the Pepper Robot. SENSORS 2020; 20:s20247271. [PMID: 33352943 PMCID: PMC7766942 DOI: 10.3390/s20247271] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 12/10/2020] [Accepted: 12/14/2020] [Indexed: 11/17/2022]
Abstract
Recent studies in social robotics show that it can provide economic efficiency and growth in domains such as retail, entertainment, and active and assisted living (AAL). Recent work also highlights that users expect affordable social robotics platforms that provide focused and specific assistance in a robust manner. In this paper, we present the AMIRO social robotics framework, designed in a modular and robust way for assistive care scenarios. The framework includes robotic services for navigation, person detection and recognition, multi-lingual natural language interaction and dialogue management, as well as activity recognition and general behavior composition. We present the platform-independent implementation of AMIRO, based on the Robot Operating System (ROS). We focus on quantitative evaluations of each functionality module, discussing their performance in different settings and possible improvements. We showcase the deployment of the AMIRO framework on a popular social robotics platform, the Pepper robot, and present the experience of developing a complex user interaction scenario employing all available functionality modules within AMIRO.
30
Abstract
BACKGROUND The ultimate goal of artificial intelligence (AI) is to develop technologies that are best able to serve humanity. This will require advancements that go beyond the basic components of general intelligence. The term "intelligence" does not best represent the technological needs of advancing society, because it is "wisdom", rather than intelligence, that is associated with greater well-being, happiness, health, and perhaps even longevity of the individual and the society. Thus, the future need in technology is for artificial wisdom (AW). METHODS We examine the constructs of human intelligence and human wisdom in terms of their basic components, neurobiology, and relationship to aging, based on published empirical literature. We review the development of AI as inspired and driven by the model of human intelligence, and consider possible governing principles for AW that would enable humans to develop computers which can operationally utilize wise principles and result in wise acts. We review relevant examples of current efforts to develop such wise technologies. RESULTS AW systems will be based on developmental models of the neurobiology of human wisdom. These AW systems need to be able to a) learn from experience and self-correct; b) exhibit compassionate, unbiased, and ethical behaviors; and c) discern human emotions and help the human users to regulate their emotions and make wise decisions. CONCLUSIONS A close collaboration among computer scientists, neuroscientists, mental health experts, and ethicists is necessary for developing AW technologies, which will emulate the qualities of wise humans and thus serve the greatest benefit to humanity. Just as human intelligence and AI have helped further the understanding and usefulness of each other, human wisdom and AW can aid in promoting each other's growth.
Affiliation(s)
- Dilip V. Jeste
- Department of Psychiatry, University of California San Diego, La Jolla, CA, US
- Sam and Rose Stein Institute for Research on Aging, University of California San Diego, La Jolla, CA, US
- Department of Neurosciences, University of California San Diego, La Jolla, CA, US
| | - Sarah A. Graham
- Department of Psychiatry, University of California San Diego, La Jolla, CA, US
- Sam and Rose Stein Institute for Research on Aging, University of California San Diego, La Jolla, CA, US
| | - Tanya T. Nguyen
- Department of Psychiatry, University of California San Diego, La Jolla, CA, US
- Sam and Rose Stein Institute for Research on Aging, University of California San Diego, La Jolla, CA, US
| | - Colin A. Depp
- Department of Psychiatry, University of California San Diego, La Jolla, CA, US
- Sam and Rose Stein Institute for Research on Aging, University of California San Diego, La Jolla, CA, US
- VA San Diego Healthcare System
| | - Ellen E. Lee
- Department of Psychiatry, University of California San Diego, La Jolla, CA, US
- Sam and Rose Stein Institute for Research on Aging, University of California San Diego, La Jolla, CA, US
- VA San Diego Healthcare System
| | - Ho-Cheol Kim
- AI and Cognitive Software, IBM Research-Almaden, San Jose, CA, US
31
Abstract
Nowadays, robotics is developing at a much faster pace than ever before, both inside and outside industrial environments [...]