1
Morillo-Mendez L, Schrooten MGS, Loutfi A, Mozos OM. Age-Related Differences in the Perception of Robotic Referential Gaze in Human-Robot Interaction. Int J Soc Robot 2022:1-13. PMID: 36185773; PMCID: PMC9510350; DOI: 10.1007/s12369-022-00926-6.
Abstract
There is increased interest in using social robots to assist older adults in their daily life activities. As social robots are designed to interact with older users, it becomes relevant to study these interactions through the lens of social cognition. Gaze following, the social ability to infer where other people are looking, deteriorates with age. Therefore, referential gaze from robots might not be an effective social cue for indicating spatial locations to older users. In this study, we explored the performance of older adults, middle-aged adults, and younger controls in a task assisted by the referential gaze of a Pepper robot. We examined age-related differences in task performance and in self-reported social perception of the robot. Our main findings show that referential gaze from a robot benefited task performance, although the magnitude of this facilitation was lower for older participants. Moreover, perceived anthropomorphism of the robot varied less as a result of its referential gaze in older adults. This research supports the view that social robots, even if limited in their gazing capabilities, can be effectively perceived as social entities. It also suggests that robotic social cues, usually validated with young participants, may be suboptimal cues for older adults. Supplementary Information: The online version contains supplementary material available at 10.1007/s12369-022-00926-6.
Affiliation(s)
- Lucas Morillo-Mendez
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
- Amy Loutfi
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
- Oscar Martinez Mozos
- Centre for Applied Autonomous Sensor Systems, Örebro University, Fakultetsgatan 1, Örebro, 702 81 Sweden
2
Koren Y, Feingold Polak R, Levy-Tzedek S. Extended Interviews with Stroke Patients Over a Long-Term Rehabilitation Using Human–Robot or Human–Computer Interactions. Int J Soc Robot 2022; 14:1893-1911. PMID: 36158255; PMCID: PMC9483483; DOI: 10.1007/s12369-022-00909-7.
Abstract
Socially assistive robots (SARs) have been proposed to assist post-stroke patients in performing their exercises during the rehabilitation process, with trust in the robot identified as an important factor in human–robot interaction. In the current study, we aimed to identify and characterize factors that influence post-stroke patients' trust in a robot-operated and a computer-operated rehabilitation platform, during and after a long-term experience with the platform. We conducted 29 interviews with 16 stroke patients who underwent a long-term rehabilitation process assisted by either a SAR or a computer interface. The intervention lasted 5–7 weeks per patient, for a total of 229 sessions over 18 months. By using a qualitative research method (extended interviews "in the wild" with stroke patients over a long-term rehabilitation process), our study reveals users' perspectives on the factors affecting trust in the SAR or in the computer interface during rehabilitation. The results support the assertion that SARs have added value in the rehabilitative care of stroke patients: personal characteristics, such as age and gender, appear to affect users' acceptance of a non-human operator as a practice assistant. Our findings also support the notion that SARs augment rehabilitative therapies beyond a standard computer. Importantly, patients appreciated different aspects of the non-human operator in the two groups: in the SAR group, users preferred its functional performance over its anthropomorphized social skills, whereas in the computer group, users highlighted its contribution to training their memory skills.
Affiliation(s)
- Yaacov Koren
- Department of Sociology and Anthropology, Tel-Aviv University, Tel-Aviv, Israel
- Ronit Feingold Polak
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg, Freiburg, Germany
3
Ligthart MEU, Neerincx MA, Hindriks KV. Getting acquainted: First steps for child-robot relationship formation. Front Robot AI 2022; 9:853665. PMID: 36185971; PMCID: PMC9520327; DOI: 10.3389/frobt.2022.853665.
Abstract
In this article we discuss two studies of children getting acquainted with an autonomous socially assistive robot. The success of the first encounter is key to a sustainable long-term supportive relationship. We provide four validated behavior design elements that enable the robot to robustly get acquainted with a child. The first is a set of five conversational patterns that allow children to comfortably self-disclose to the robot. The second is a reciprocation strategy that enables the robot to respond adequately to the children's self-disclosures. The third is a 'how to talk to me' tutorial. The fourth is a personality profile for the robot that creates more rapport and comfort between the child and the robot. The designs were validated in two user studies (N1 = 30, N2 = 75; children aged 8–11 years). The results furthermore showed similarities between how children form relationships with people and how they form relationships with robots. Most importantly, self-disclosure, and specifically how intimate the self-disclosures are, is an important predictor of the success of child-robot relationship formation. Speech recognition errors reduce the intimacy of self-disclosures, while feeling similar to the robot increases it.
Affiliation(s)
- Mark A. Neerincx
- Interactive Intelligence, Delft University of Technology, Delft, Netherlands
- Perceptual & Cognitive Systems, TNO, Soesterberg, Netherlands
- Correspondence: Mark A. Neerincx
4
Autonomous Critical Help by a Robotic Assistant in the Field of Cultural Heritage: A New Challenge for Evolving Human-Robot Interaction. Multimodal Technologies and Interaction 2022. DOI: 10.3390/mti6080069.
Abstract
Over the years, cultural heritage (CH) sites (e.g., museums) have increasingly focused on providing personalized services to different users, with the main goal of adapting those services to the visitors' personal traits, goals, and interests. In this work, we propose a computational cognitive model that provides an artificial agent (e.g., a robot or virtual assistant) with the capability to personalize a museum visit to the goals and interests of the user who intends to visit the museum, while taking into account the goals and interests of the museum curators who designed the exhibition. In particular, we introduce and analyze a special type of help (critical help) that leads to a substantial change in the user's request, with the objective of addressing needs that the user cannot, or has not been able to, assess. The computational model has been implemented using the multi-agent oriented programming (MAOP) framework JaCaMo, which integrates three different multi-agent programming levels. We provide the results of a pilot study conducted to test the potential of the computational model. The experiment involved 26 participants who interacted with the humanoid robot Nao, widely used in Human-Robot Interaction (HRI) scenarios.
5
Bartlett ME, Edmunds CER, Belpaeme T, Thill S. Have I Got the Power? Analysing and Reporting Statistical Power in HRI. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3495246.
Abstract
This article presents a discussion of the importance of power analyses, providing an overview of when a power analysis should be run in the context of Human-Robot Interaction research, as well as examples of how to perform one. This work was motivated by the observation that the majority of papers published in the proceedings of recent HRI conferences did not report conducting a power analysis, an observation with concerning implications for many of the conclusions drawn by these studies. This work is intended to raise awareness and to encourage researchers to conduct power analyses when designing studies with human participants.
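As a rough illustration of the kind of a priori power analysis the article advocates (this sketch is not taken from the article itself), the per-group sample size for a two-sided, two-sample t-test can be approximated from standard-normal quantiles; the helper name `n_per_group` is hypothetical:

```python
from statistics import NormalDist
import math

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample t-test,
    using the normal approximation with Cohen's d as effect size."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (d = 0.5), alpha = .05, power = .80
print(n_per_group(0.5))  # -> 63 per group
```

The normal approximation slightly underestimates the exact noncentral-t calculation (tools such as G*Power report about 64 per group for this case), but it makes the key point plain: detecting a medium effect needs far larger samples than are typical in HRI user studies.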
Affiliation(s)
- Madeleine E. Bartlett
- University of Waterloo, Canada and CRNS, University of Plymouth, Plymouth, United Kingdom
- Serge Thill
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
6
Matsumoto S, Washburn A, Riek LD. A Framework to Explore Proximate Human-Robot Coordination. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3526101.
Abstract
Proximate human-robot teaming (pxHRT) is a complex subspace within human-robot interaction. Studies in this space involve a range of equipment and methods, including the ability to sense people and robots precisely. Research in this area draws from a wide variety of other fields, from human-human interaction to control theory, making study design complex, particularly for those outside the field of HRI. In this paper, we introduce a framework that helps researchers consider tradeoffs across various task contexts, platforms, sensors, and analysis methods; metrics frequently used in the field; and common challenges researchers may face. We demonstrate the use of the framework via a case study which employs an autonomous mobile manipulator continuously engaging in shared workspace, handover, and co-manipulation tasks with people, and explores the effect of cognitive workload on pxHRT dynamics. We also demonstrate the utility of the framework in a case study with two groups of researchers new to pxHRT. With this framework, we hope to enable researchers, especially those outside HRI, to more thoroughly consider these complex components within their studies, more easily design experiments, and more fully explore research questions within the space of pxHRT.
7
Andriella A, Torras C, Abdelnour C, Alenyà G. Introducing CARESSER: A framework for in situ learning robot social assistance from expert knowledge and demonstrations. User Modeling and User-Adapted Interaction 2022; 33:441-496. PMID: 35311217; PMCID: PMC8916953; DOI: 10.1007/s11257-021-09316-5.
Abstract
Socially assistive robots have the potential to augment and enhance therapists' effectiveness in repetitive tasks such as cognitive therapies. However, their contribution has generally been limited, as domain experts have not been fully involved in the entire pipeline of the design process or in the automatisation of the robots' behaviour. In this article, we present aCtive leARning agEnt aSsiStive bEhaviouR (CARESSER), a novel framework that actively learns robotic assistive behaviour by leveraging the therapist's expertise (a knowledge-driven approach) and their demonstrations (a data-driven approach). By exploiting this hybrid approach, the presented method enables fast in situ learning, in a fully autonomous fashion, of personalised patient-specific policies. To evaluate our framework, we conducted two user studies in a daily care centre in which older adults with mild dementia and mild cognitive impairment (N = 22) were asked to solve cognitive exercises with the support of a therapist and, later, of a robot endowed with CARESSER. Results showed that: (i) the robot managed to keep the patients' performance stable during the sessions, even more so than the therapist; (ii) the assistance offered by the robot during the sessions eventually matched the therapist's preferences. We conclude that CARESSER, with its stakeholder-centric design, can pave the way for new AI approaches that learn by leveraging human-human interactions along with human expertise, which has the benefits of speeding up the learning process, eliminating the need to design complex reward functions, and avoiding undesired states.
Affiliation(s)
- Antonio Andriella
- CSIC-UPC, Institut de Robòtica i Informàtica Industrial, C/Llorens i Artigas 4-6, 08028 Barcelona, Spain
- Carme Torras
- CSIC-UPC, Institut de Robòtica i Informàtica Industrial, C/Llorens i Artigas 4-6, 08028 Barcelona, Spain
- Carla Abdelnour
- Research Center and Memory Clinic, Fundació ACE, Institut Català de Neurociències Aplicades, Universitat Internacional de Catalunya, Barcelona, Spain
- Guillem Alenyà
- CSIC-UPC, Institut de Robòtica i Informàtica Industrial, C/Llorens i Artigas 4-6, 08028 Barcelona, Spain
8
Leichtmann B, Nitsch V, Mara M. Crisis Ahead? Why Human-Robot Interaction User Studies May Have Replicability Problems and Directions for Improvement. Front Robot AI 2022; 9:838116. PMID: 35360497; PMCID: PMC8961736; DOI: 10.3389/frobt.2022.838116.
Abstract
There is a confidence crisis in many scientific disciplines, in particular those researching human behavior, as many effects from original experiments have not been replicated successfully in large-scale replication studies. While human-robot interaction (HRI) is an interdisciplinary research field, the study of human behavior, cognition, and emotion also plays a vital part in HRI. Are HRI user studies facing the same problems as other fields, and if so, what can be done to overcome them? In this article, we first give a short overview of the replicability crisis in the behavioral sciences and its causes. In a second step, we estimate the replicability of HRI user studies mainly 1) by structurally comparing HRI research processes and practices with those of other disciplines with replicability issues, 2) by systematically reviewing meta-analyses of HRI user studies to identify parameters that are known to affect replicability, and 3) by summarizing the first replication studies in HRI as direct evidence. Our findings suggest that HRI user studies often exhibit the same problems that caused the replicability crisis in many behavioral sciences, such as small sample sizes, lack of theory, or missing information in reported data. To improve the stability of future HRI research, we propose several statistical, methodological, and social reforms. This article aims to provide a basis for further discussion and a potential outline for improvements in the field.
Affiliation(s)
- Benedikt Leichtmann
- LIT Robopsychology Lab, Johannes Kepler University Linz, Linz, Austria
- Correspondence: Benedikt Leichtmann
- Verena Nitsch
- Institute of Industrial Engineering and Ergonomics, RWTH Aachen University, Aachen, Germany
- Martina Mara
- LIT Robopsychology Lab, Johannes Kepler University Linz, Linz, Austria
9
Tuncer S, Gillet S, Leite I. Robot-Mediated Inclusive Processes in Groups of Children: From Gaze Aversion to Mutual Smiling Gaze. Front Robot AI 2022; 9:729146. PMID: 35308460; PMCID: PMC8927292; DOI: 10.3389/frobt.2022.729146.
Abstract
Our work is motivated by the idea that social robots can help inclusive processes in groups of children, focusing on the case of children who have newly arrived from a foreign country and their peers at school. Building on an initial study in which we tested different robot behaviours and recorded children's interactions mediated by a robot in a game, we present in this paper the findings from a subsequent analysis of the same video data, drawing on ethnomethodology and conversation analysis. We describe how this approach differs from the predominantly quantitative video analysis in HRI; how mutual gaze emerged as a challenging interactional accomplishment between unacquainted children; and why we focused on this phenomenon. We identify two situations and trajectories in which children make eye contact: asking for or giving instructions, and sharing an emotional reaction. Based on detailed analyses of a selection of extracts in the empirical section, we describe patterns and discuss the links between the different situations and trajectories and relationship building. Our findings inform HRI and robot design by identifying complex interactional accomplishments between two children, as well as the group dynamics that support these interactions. We argue that social robots should be able to perceive such phenomena in order to better support the inclusion of outgroup children. Lastly, by explaining how we combined approaches and showing how they build on each other, we also hope to demonstrate the value of interdisciplinary research and to encourage it.
10
Fraune MR, Leite I, Karatas N, Amirova A, Legeleux A, Sandygulova A, Neerincx A, Dilip Tikas G, Gunes H, Mohan M, Abbasi NI, Shenoy S, Scassellati B, de Visser EJ, Komatsu T. Lessons Learned About Designing and Conducting Studies From HRI Experts. Front Robot AI 2022; 8:772141. PMID: 35155588; PMCID: PMC8832512; DOI: 10.3389/frobt.2021.772141.
Abstract
The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields, including computer science, engineering, informatics, philosophy, psychology, and more. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors who are experts in areas such as real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, which the workshop participants helped shape based on what they discovered during the workshop. We organize the lessons learned into four themes, reflecting the areas of the papers submitted to the workshop: 1) improving study design for HRI, 2) how to work with participants, especially children, 3) making the most of the study's and robot's limitations, and 4) how to collaborate well across fields. These themes include practical tips and guidelines to help researchers learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid common mistakes and pitfalls in their research.
Affiliation(s)
- Marlena R. Fraune
- Intergroup Human-Robot Interaction (iHRI) Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Correspondence: Marlena R. Fraune
- Iolanda Leite
- Division of Robotics, Perception, and Learning (RPL), School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Nihan Karatas
- Human-Machine Interaction (HMI) and Human Characteristics Research Division, Institutes of Innovation for Future Society, Nagoya University, Nagoya, Japan
- Aida Amirova
- Department of Robotics and Mechatronics, School of Engineering and Digital Sciences, Nazarbayev University, Nur-Sultan, Kazakhstan
- Amélie Legeleux
- Lab-STICC, University of South Brittany, CNRS UMR 6285, Brest, France
- Anara Sandygulova
- Department of Robotics and Mechatronics, School of Engineering and Digital Sciences, Nazarbayev University, Nur-Sultan, Kazakhstan
- Anouk Neerincx
- Lab-STICC, University of South Brittany, CNRS UMR 6285, Brest, France
- Gaurav Dilip Tikas
- Strategy, Innovation and Entrepreneurship Area, Institute of Management Technology, Ghaziabad, India
- Hatice Gunes
- Affective Intelligence and Robotics Lab, Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Mayumi Mohan
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Nida Itrat Abbasi
- Affective Intelligence and Robotics Lab, Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Sudhir Shenoy
- Human-AI Technology Lab, Computer Engineering Program, University of Virginia, Charlottesville, VA, United States
- Brian Scassellati
- Social Robotics Lab, Department of Computer Science, Yale University, New Haven, CT, United States
- Ewart J. de Visser
- Warfighter Effectiveness Research Center, U.S. Air Force Academy, Colorado Springs, CO, United States
- Takanori Komatsu
- Department of Frontier Media Science, School of Interdisciplinary Mathematical Science, Meiji University, Tokyo, Japan
11
Inamura T, Mizuchi Y. SIGVerse: A Cloud-Based VR Platform for Research on Multimodal Human-Robot Interaction. Front Robot AI 2021; 8:549360. PMID: 34136534; PMCID: PMC8202404; DOI: 10.3389/frobt.2021.549360.
Abstract
Research on Human-Robot Interaction (HRI) requires careful experimental design, as well as a significant amount of time to run subject experiments. Recent virtual reality (VR) technology can potentially address these time and effort challenges. The significant advantages of VR systems for HRI are: 1) cost reduction, as experimental facilities are not required in a real environment; 2) provision of the same environmental and embodied interaction conditions to all test subjects; 3) visualization of arbitrary information and of situations that cannot occur in reality, such as playback of past experiences; and 4) easy access to an immersive and natural interface for robot/avatar teleoperation. Although VR tools with these features have been applied and developed in previous HRI research, all-encompassing tools or frameworks remain unavailable. In particular, the benefits of integration with cloud computing have not been comprehensively considered. Hence, the purpose of this study is to propose a research platform that comprehensively provides the elements required for HRI research by integrating VR and cloud technologies. To realize a flexible and reusable system, we developed a real-time bridging mechanism between the Robot Operating System (ROS) and Unity. To confirm the feasibility of the system in a practical HRI scenario, we applied the proposed system to three case studies, including a robot competition named RoboCup@Home. Via these case studies, we validated the system's usefulness and its potential for the development and evaluation of social intelligence via multimodal HRI.
Affiliation(s)
- Tetsunari Inamura
- National Institute of Informatics, Tokyo, Japan
- Department of Informatics, The Graduate University for Advanced Studies (SOKENDAI), Tokyo, Japan
12
Henschel A, Laban G, Cross ES. What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You. Current Robotics Reports 2021; 2:9-19. PMID: 34977592; PMCID: PMC7860159; DOI: 10.1007/s43154-020-00035-0.
Abstract
Purpose of Review: We provide an outlook on the definitions, laboratory research, and applications of social robots, with an aim to understand what makes a robot social, in the eyes of science and the general public.
Recent Findings: Social robots demonstrate their potential when deployed within contexts appropriate to their form and functions. Some examples include companions for the elderly and cognitively impaired individuals, robots within educational settings, and as tools to support cognitive and behavioural change interventions.
Summary: Science fiction has inspired us to conceive of a future with autonomous robots helping with every aspect of our daily lives, although the robots we are familiar with through film and literature remain a vision of the distant future. While there are still miles to go before robots become a regular feature within our social spaces, rapid progress in social robotics research, aided by the social sciences, is helping to move us closer to this reality.
Affiliation(s)
- Anna Henschel
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Guy Laban
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Emily S Cross
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Department of Cognitive Science, Macquarie University, Sydney, Australia