1
Salomons N, Scassellati B. Time-dependant Bayesian knowledge tracing-Robots that model user skills over time. Front Robot AI 2024; 10:1249241. PMID: 38469397; PMCID: PMC10925631; DOI: 10.3389/frobt.2023.1249241.
Abstract
Creating an accurate model of a user's skills is an essential task for Intelligent Tutoring Systems (ITS) and robotic tutoring systems, as it allows the system to provide personalized help based on the user's knowledge state. Most user skill modeling systems have focused on simpler tasks, such as arithmetic or multiple-choice questions, where the user's model is only updated upon task completion. These tasks have a single correct answer and generate an unambiguous observation of the user's answer. This is not the case for more complex tasks such as programming or engineering, where a user working through the task produces a succession of noisy observations as they address its different parts. We create an algorithm called Time-Dependant Bayesian Knowledge Tracing (TD-BKT) that tracks users' skills throughout these more complex tasks. We show in simulation that it maintains a more accurate model of the user's skills and can therefore select better teaching actions than previous algorithms. Lastly, we show that a robot can use TD-BKT to model a user and teach electronic circuit tasks to participants in a user study. Our results show that participants' skills significantly improved when modeled using TD-BKT.
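TD-BKT extends standard Bayesian Knowledge Tracing to time-dependent, noisy observations. As context for the abstract, the classic BKT posterior update over a skill's mastery probability can be sketched as follows; the parameter names and values (slip, guess, learn rates) are the conventional BKT parameters, not figures taken from this paper:

```python
def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One classic BKT step: Bayes update on an observed answer, then a learning transition."""
    if correct:
        # P(mastered | correct): a master answers correctly unless they slip
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        # P(mastered | incorrect): a master only errs by slipping
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # the skill may also be learned during this practice opportunity
    return posterior + (1 - posterior) * p_learn

# running estimate over a short sequence of graded answers
p = 0.3
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
```

The point TD-BKT addresses is that complex tasks emit many such noisy observations mid-task rather than one clean answer at the end.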
Affiliation(s)
- Nicole Salomons
- Department of Computer Science, Yale University, New Haven, CT, United States
- I-X and the Department of Computing, Imperial College London, London, United Kingdom
- Brian Scassellati
- Department of Computer Science, Yale University, New Haven, CT, United States
2
Parker TC, Zhang X, Noah JA, Tiede M, Scassellati B, Kelley M, McPartland JC, Hirsch J. Neural and visual processing of social gaze cueing in typical and ASD adults. medRxiv 2023:2023.01.30.23284243. PMID: 36778502; PMCID: PMC9915835; DOI: 10.1101/2023.01.30.23284243.
Abstract
Atypical eye gaze in joint attention is a clinical characteristic of autism spectrum disorder (ASD). Despite this documented symptom, neural processing of joint attention tasks in real-life social interactions is not understood. To address this knowledge gap, functional near-infrared spectroscopy (fNIRS) and eye-tracking data were acquired simultaneously as ASD and typically developed (TD) individuals engaged in a gaze-directed joint attention task with a live human and robot partner. We test the hypothesis that face-processing deficits in ASD are greater for interactive faces than for simulated (robot) faces. Consistent with prior findings, neural responses during human gaze cueing modulated by face visual dwell time resulted in increased activity of ventral frontal regions in ASD and dorsal parietal systems in TD participants. Hypoactivity of the right dorsal parietal area during live human gaze cueing was correlated with autism spectrum symptom severity as measured by Brief Observations of Symptoms of Autism (BOSA) scores (r = −0.86). In contrast, neural activity in response to robot gaze cueing modulated by visual acquisition factors activated dorsal parietal systems in ASD, and this neural activity was not related to autism symptom severity (r = 0.06). These results are consistent with the hypothesis that altered encoding of incoming facial information to the dorsal parietal cortex is specific to live human faces in ASD. These findings open new directions for understanding joint attention difficulties in ASD by providing a connection between superior parietal lobule activity and live interaction with human faces.
Lay summary: Little is known about why it is so difficult for autistic individuals to make eye contact with other people. We find that in a live face-to-face viewing task with a robot, the brains of autistic participants responded similarly to those of typical participants, but not when the partner was a live human. Findings suggest that difficulties in real-life social situations for autistic individuals may be specific to live social interaction rather than to face gaze in general.
3
Qin M, Brawer J, Scassellati B. Robot tool use: A survey. Front Robot AI 2023; 9:1009488. PMID: 36726401; PMCID: PMC9885045; DOI: 10.3389/frobt.2022.1009488.
Abstract
Using human tools can significantly benefit robots in many application domains, allowing them to solve problems that they could not solve otherwise. However, robot tool use is a challenging task; tool use was initially considered to be the ability that distinguishes human beings from other animals. We identify three skills required for robot tool use: perception, manipulation, and high-level cognition. While general manipulation tasks and tool-use tasks require the same level of perception accuracy, tool use poses unique manipulation and cognition challenges. In this survey, we first define robot tool use. The definition highlights the skills it requires, which coincide with an affordance model that defines a three-way relation between actions, objects, and effects. We also compile a taxonomy of robot tool use with insights from the animal tool use literature. Our definition and taxonomy lay a theoretical foundation for future robot tool use studies and also serve as practical guidelines for robot tool use applications. We first categorize tool use based on the context of the task: contexts are highly similar for the same task (e.g., cutting) in non-causal tool use, while contexts for causal tool use are diverse. We further categorize causal tool use, based on the task complexity suggested in animal tool use studies, into single-manipulation and multiple-manipulation tool use. Single-manipulation tool use is sub-categorized based on tool features and prior experience of tool use; these sub-types may be considered the building blocks of causal tool use. Multiple-manipulation tool use combines these building blocks in different ways, and the different combinations form its sub-categories. Moreover, we identify the skills required in each sub-type of the taxonomy. We then review previous studies on robot tool use according to the taxonomy and describe how the relevant relations are learned in these studies. We conclude with a discussion of current applications of robot tool use and open questions to address in future robot tool use research.
4
Mangin O, Roncone A, Scassellati B. How to be Helpful? Supportive Behaviors and Personalization for Human-Robot Collaboration. Front Robot AI 2022; 8:725780. PMID: 35237667; PMCID: PMC8882984; DOI: 10.3389/frobt.2021.725780.
Abstract
The field of Human-Robot Collaboration (HRC) has seen considerable progress in recent years. Thanks in part to advances in control and perception algorithms, robots have started to work in increasingly unstructured environments, where they operate side by side with humans to achieve shared tasks. However, little progress has been made toward systems that are truly effective in supporting the human, proactive in their collaboration, and able to autonomously take care of part of the task. In this work, we present a collaborative system capable of assisting a human worker despite limited manipulation capabilities, an incomplete model of the task, and partial observability of the environment. Our framework leverages information from a high-level, hierarchical model that is shared between the human and robot, enabling transparent synchronization between the peers and mutual understanding of each other's plan. More precisely, we first derive a partially observable Markov model from the high-level task representation; we then use an online Monte-Carlo solver to compute a short-horizon robot-executable plan. The resulting policy is capable of interactive replanning on the fly, dynamic error recovery, and identification of hidden user preferences. We demonstrate that the system robustly provides support to the human in a realistic furniture-construction task.
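The pipeline the abstract describes (belief over a partially observable task state, then an online Monte-Carlo solver choosing a short-horizon action) can be illustrated in miniature. This is a generic sketch, not the authors' implementation: the two-state "part alignment" task, its actions, and all reward values are invented for illustration:

```python
import random

# Hypothetical hidden state: an assembly part is either aligned or misaligned.
STATES = ["aligned", "misaligned"]
ACTIONS = ["attach", "realign"]

def step(state, action):
    """Invented transition/reward model: attaching only pays off when aligned."""
    if action == "attach":
        return state, (10 if state == "aligned" else -5)
    return "aligned", -1  # realigning fixes the part at a small cost

def belief_update(belief, obs, accuracy=0.85):
    """Bayes filter over the two states given one noisy observation."""
    unnorm = {s: (accuracy if s == obs else 1 - accuracy) * belief[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

def plan(belief, horizon=2, rollouts=300):
    """Online Monte-Carlo planning: sample states from the belief, roll out each action."""
    best_action, best_value = None, float("-inf")
    for action in ACTIONS:
        total = 0.0
        for _ in range(rollouts):
            state = random.choices(STATES, weights=[belief[s] for s in STATES])[0]
            state, reward = step(state, action)
            for _ in range(horizon - 1):  # random rollout for the remaining steps
                state, r = step(state, random.choice(ACTIONS))
                reward += r
            total += reward
        if total / rollouts > best_value:
            best_action, best_value = action, total / rollouts
    return best_action

random.seed(0)
belief = {"aligned": 0.5, "misaligned": 0.5}
belief = belief_update(belief, "misaligned")  # the robot's camera suggests misalignment
action = plan(belief)  # with a likely-misaligned part, realigning first dominates
```

Full POMDP solvers such as POMCP grow a search tree over action-observation histories instead of flat rollouts, but the belief-then-plan loop above is the same shape.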
Affiliation(s)
- Olivier Mangin
- Social Robotics Lab, Computer Science Department, Yale University, New Haven, CT, United States
- Alessandro Roncone
- Human Interaction and Robotics Group, Computer Science Department, University of Colorado Boulder, Boulder, CO, United States
- *Correspondence: Alessandro Roncone
- Brian Scassellati
- Social Robotics Lab, Computer Science Department, Yale University, New Haven, CT, United States
5
Fraune MR, Leite I, Karatas N, Amirova A, Legeleux A, Sandygulova A, Neerincx A, Dilip Tikas G, Gunes H, Mohan M, Abbasi NI, Shenoy S, Scassellati B, de Visser EJ, Komatsu T. Lessons Learned About Designing and Conducting Studies From HRI Experts. Front Robot AI 2022; 8:772141. PMID: 35155588; PMCID: PMC8832512; DOI: 10.3389/frobt.2021.772141.
Abstract
The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields, including computer science, engineering, informatics, philosophy, and psychology. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors who are experts in areas such as real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons-learned section, which the workshop participants helped form based on what they discovered during the workshop. We organize the lessons into themes reflecting the areas of the papers submitted to the workshop: 1) improving study design for HRI, 2) how to work with participants, especially children, 3) making the most of the study and the robot's limitations, and 4) how to collaborate well across fields. These themes include practical tips and guidelines to help researchers learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid common mistakes and pitfalls in their research.
Affiliation(s)
- Marlena R. Fraune
- Intergroup Human-Robot Interaction (iHRI) Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- *Correspondence: Marlena R. Fraune
- Iolanda Leite
- Division of Robotics, Perception, and Learning (RPL), School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Nihan Karatas
- Human-Machine Interaction (HMI) and Human Characteristics Research Division, Institutes of Innovation for Future Society, Nagoya University, Nagoya, Japan
- Aida Amirova
- Department of Robotics and Mechatronics, School of Engineering and Digital Sciences, Nazarbayev University, Nur-Sultan, Kazakhstan
- Amélie Legeleux
- Lab-STICC, University of South Brittany, CNRS UMR 6285, Brest, France
- Anara Sandygulova
- Department of Robotics and Mechatronics, School of Engineering and Digital Sciences, Nazarbayev University, Nur-Sultan, Kazakhstan
- Anouk Neerincx
- Lab-STICC, University of South Brittany, CNRS UMR 6285, Brest, France
- Gaurav Dilip Tikas
- Strategy, Innovation and Entrepreneurship Area, Institute of Management Technology, Ghaziabad, India
- Hatice Gunes
- Affective Intelligence and Robotics Lab, Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Mayumi Mohan
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Nida Itrat Abbasi
- Affective Intelligence and Robotics Lab, Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
- Sudhir Shenoy
- Human-AI Technology Lab, Computer Engineering Program, University of Virginia, Charlottesville, VA, United States
- Brian Scassellati
- Social Robotics Lab, Department of Computer Science, Yale University, New Haven, CT, United States
- Ewart J. de Visser
- Warfighter Effectiveness Research Center, U.S. Air Force Academy, Colorado Springs, CO, United States
- Takanori Komatsu
- Department of Frontier Media Science, School of Interdisciplinary Mathematical Science, Meiji University, Tokyo, Japan
6
Qin M, Brawer J, Scassellati B. Rapidly Learning Generalizable and Robot-Agnostic Tool-Use Skills for a Wide Range of Tasks. Front Robot AI 2022; 8:726463. PMID: 34970599; PMCID: PMC8712875; DOI: 10.3389/frobt.2021.726463.
Abstract
Many real-world applications require robots to use tools. However, robots lack the skills necessary to learn and perform many essential tool-use tasks. To this end, we present the TRansferrIng Skilled Tool Use Acquired Rapidly (TRI-STAR) framework for task-general robot tool use. TRI-STAR has three primary components: 1) the ability to learn and apply tool-use skills to a wide variety of tasks from a minimal number of training demonstrations, 2) the ability to generalize learned skills to other tools and manipulated objects, and 3) the ability to transfer learned skills to other robots. These capabilities are enabled by TRI-STAR’s task-oriented approach, which identifies and leverages structural task knowledge through the use of our goal-based task taxonomy. We demonstrate this framework with seven tasks that impose distinct requirements on the usages of the tools, six of which were each performed on three physical robots with varying kinematic configurations. Our results demonstrate that TRI-STAR can learn effective tool-use skills from only 20 training demonstrations. In addition, our framework generalizes tool-use skills to morphologically distinct objects and transfers them to new platforms, with minor performance degradation.
Affiliation(s)
- Meiying Qin
- Yale Social Robotics Lab, Department of Computer Science, Yale University, New Haven, CT, United States
- Jake Brawer
- Yale Social Robotics Lab, Department of Computer Science, Yale University, New Haven, CT, United States
- Brian Scassellati
- Yale Social Robotics Lab, Department of Computer Science, Yale University, New Haven, CT, United States
7
Abstract
Studies have shown that people conform their answers to match those of group members even when they believe the group's answer to be wrong [2]. In this experiment, we test whether people conform to groups of robots and whether the robots cause informational conformity (believing the group to be correct), normative conformity (feeling peer pressure), or both. We conducted an experiment in which participants (N = 63) played a subjective game with three robots. We measured conformity by how many times participants changed their preliminary answers to match the robots' answers in their final answer. Participants in conditions that were given more information about the robots' answers conformed significantly more than those given less, indicating informational conformity. Participants who were aware they were a minority in their answers conformed more than those who were unaware they were a minority. They also reported feeling more pressure from the robots to change their answers, and the amount of pressure reported was correlated with how frequently they conformed, indicating normative conformity. We therefore conclude that robots can cause both informational and normative conformity in people.
8
Kelley MS, Noah JA, Zhang X, Scassellati B, Hirsch J. Comparison of Human Social Brain Activity During Eye-Contact With Another Human and a Humanoid Robot. Front Robot AI 2021; 7:599581. PMID: 33585574; PMCID: PMC7879449; DOI: 10.3389/frobt.2020.599581.
Abstract
Robot design to simulate interpersonal social interaction is an active area of research with applications in therapy and companionship. Neural responses to eye-to-eye contact in humans have recently been employed to determine the neural systems that are active during social interactions. Whether eye contact with a social robot engages the same neural systems remains to be seen. Here, we employ a similar approach to compare human-human and human-robot social interactions. We assume that if human-human and human-robot eye contact elicit similar neural activity in the human, then the perceptual and cognitive processing is also the same; that is, the robot is processed similarly to the human. However, if the neural effects differ, then the perceptual and cognitive processing is assumed to differ as well. In this study, neural activity was compared for human-to-human and human-to-robot conditions using near-infrared spectroscopy for neural imaging and a robot (Maki) with eyes that blink and move right and left. Eye contact was confirmed by eye tracking in both conditions. Increased neural activity was observed in human social systems, including the right temporoparietal junction and the dorsolateral prefrontal cortex, during human-human eye contact but not human-robot eye contact. This suggests that the type of human-robot eye contact used here is not sufficient to engage the right temporoparietal junction. This study establishes a foundation for future research into how elements of robot design and behavior impact human social processing within this type of interaction, and it may offer a method for capturing difficult-to-quantify components of human-robot interaction, such as social engagement.
Affiliation(s)
- Megan S. Kelley
- Interdepartmental Neuroscience Program, Yale School of Medicine, New Haven, CT, United States
- Brain Function Laboratory, Department of Psychiatry, Yale School of Medicine, New Haven, CT, United States
- J. Adam Noah
- Brain Function Laboratory, Department of Psychiatry, Yale School of Medicine, New Haven, CT, United States
- Xian Zhang
- Brain Function Laboratory, Department of Psychiatry, Yale School of Medicine, New Haven, CT, United States
- Brian Scassellati
- Social Robotics Laboratory, Department of Computer Science, Yale University, New Haven, CT, United States
- Joy Hirsch
- Interdepartmental Neuroscience Program, Yale School of Medicine, New Haven, CT, United States
- Brain Function Laboratory, Department of Psychiatry, Yale School of Medicine, New Haven, CT, United States
- Departments of Neuroscience and Comparative Medicine, Yale School of Medicine, New Haven, CT, United States
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
9
Sebo S, Dong LL, Chang N, Lewkowicz M, Schutzman M, Scassellati B. The Influence of Robot Verbal Support on Human Team Members: Encouraging Outgroup Contributions and Suppressing Ingroup Supportive Behavior. Front Psychol 2021; 11:590181. PMID: 33424708; PMCID: PMC7793683; DOI: 10.3389/fpsyg.2020.590181.
Abstract
As teams of people increasingly incorporate robot members, it is essential to consider how a robot's actions may influence the team's social dynamics and interactions. In this work, we investigated the effects of verbal support from a robot (e.g., “good idea Salim,” “yeah”) on human team members' interactions related to psychological safety and inclusion. We conducted a between-subjects experiment (N = 39 groups, 117 participants) where the robot team member either (A) gave verbal support or (B) did not give verbal support to the human team members of a human-robot team comprised of 2 human ingroup members, 1 human outgroup member, and 1 robot. We found that targeted support from the robot (e.g., “good idea George”) had a positive effect on outgroup members, who increased their verbal participation after receiving targeted support from the robot. When comparing groups that did and did not have verbal support from the robot, we found that outgroup members received fewer verbal backchannels from ingroup members if their group had robot verbal support. These results suggest that verbal support from a robot may have some direct benefits to outgroup members but may also reduce the obligation ingroup members feel to support the verbal contributions of outgroup members.
Affiliation(s)
- Sarah Sebo
- Department of Computer Science, University of Chicago, Chicago, IL, United States
- Department of Computer Science, Yale University, New Haven, CT, United States
- Ling Liang Dong
- Department of Computer Science, Yale University, New Haven, CT, United States
- Nicholas Chang
- Department of Computer Science, Yale University, New Haven, CT, United States
- Michal Lewkowicz
- Department of Computer Science, Yale University, New Haven, CT, United States
- Michael Schutzman
- Department of Computer Science, Yale University, New Haven, CT, United States
- Brian Scassellati
- Department of Computer Science, Yale University, New Haven, CT, United States
10
Yang GZ, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, Jacobstein N, Kumar V, McNutt M, Merrifield R, Nelson BJ, Scassellati B, Taddeo M, Taylor R, Veloso M, Wang ZL, Wood R. The grand challenges of Science Robotics. Sci Robot 2018; 3(14):eaar7650. PMID: 33141701; DOI: 10.1126/scirobotics.aar7650.
Abstract
One of the ambitions of Science Robotics is to deeply root robotics research in science while developing novel robotic platforms that will enable new scientific discoveries. Of our 10 grand challenges, the first 7 represent underpinning technologies that have a wider impact on all application areas of robotics. For the next two challenges, we have included social robotics and medical robotics as application-specific areas of development to highlight the substantial societal and health impacts that they will bring. Finally, the last challenge is related to responsible innovation and how ethics and security should be carefully considered as we develop the technology further.
Affiliation(s)
- Guang-Zhong Yang
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK.
- Jim Bellingham
- Center for Marine Robotics, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
- Pierre E Dupont
- Department of Cardiovascular Surgery, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Peer Fischer
- Institute of Physical Chemistry, University of Stuttgart, Stuttgart, Germany
- Micro, Nano, and Molecular Systems Laboratory, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Luciano Floridi
- Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK
- Department of Computer Science, University of Oxford, Oxford, UK
- Data Ethics Group, Alan Turing Institute, London, UK
- Department of Economics, American University, Washington, DC 20016, USA
- Robert Full
- Department of Integrative Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- Neil Jacobstein
- Singularity University, NASA Research Park, Moffett Field, CA 94035, USA
- MediaX, Stanford University, Stanford, CA 94305, USA
- Vijay Kumar
- Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marcia McNutt
- National Academy of Sciences, Washington, DC 20418, USA
- Robert Merrifield
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK
- Bradley J Nelson
- Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zurich, Switzerland
- Brian Scassellati
- Department of Computer Science, Yale University, New Haven, CT 06520, USA
- Department of Mechanical Engineering and Materials Science, Yale University, New Haven, CT 06520, USA
- Mariarosaria Taddeo
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK
- Department of Computer Science, University of Oxford, Oxford, UK
- Data Ethics Group, Alan Turing Institute, London, UK
- Russell Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Manuela Veloso
- Machine Learning Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Zhong Lin Wang
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Robert Wood
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, MA 02138, USA
11
Scassellati B, Vázquez M. The potential of socially assistive robots during infectious disease outbreaks. Sci Robot 2020; 5(44):eabc9014. PMID: 33022606; DOI: 10.1126/scirobotics.abc9014.
Abstract
Robots have a role in addressing the secondary impacts of infectious disease outbreaks by helping us sustain social distancing, monitoring and improving mental health, supporting education, and aiding in economic recovery.
12
Traeger ML, Strohkorb Sebo S, Jung M, Scassellati B, Christakis NA. Vulnerable robots positively shape human conversational dynamics in a human-robot team. Proc Natl Acad Sci U S A 2020; 117:6370-6375. PMID: 32152118; PMCID: PMC7104178; DOI: 10.1073/pnas.1910402117.
Abstract
Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot-human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect for modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
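The abstract's claim that groups "distribute their conversation somewhat more equally" implies a concrete equality measure over per-member speaking time. The paper's actual metric is not given here; as one common sketch, a Gini coefficient over turn counts captures the idea (the group sizes and turn counts below are invented for illustration):

```python
def gini(values):
    """Gini coefficient over speaking shares: 0 = perfectly equal, near 1 = one speaker dominates."""
    vals = sorted(values)            # ascending order, as the formula requires
    n, total = len(vals), sum(vals)
    cum = sum((i + 1) * v for i, v in enumerate(vals))
    return (2 * cum) / (n * total) - (n + 1) / n

equal = gini([30, 30, 30])   # three members with evenly distributed turns
skewed = gini([5, 10, 75])   # one member dominates the conversation
```

Lower values after an intervention (e.g., a robot's vulnerable statements) would indicate conversation spread more evenly across the human team members.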
Affiliation(s)
- Margaret L Traeger
- Yale Institute for Network Science, Yale University, New Haven, CT 06520
- Department of Sociology, Yale University, New Haven, CT 06520
- Malte Jung
- Department of Information Science, Cornell University, Ithaca, NY 14853
- Brian Scassellati
- Department of Computer Science, Yale University, New Haven, CT 06520
- Nicholas A Christakis
- Yale Institute for Network Science, Yale University, New Haven, CT 06520
- Department of Sociology, Yale University, New Haven, CT 06520
- Department of Biomedical Engineering, Yale University, New Haven, CT 06520
- Department of Statistics and Data Science, Yale University, New Haven, CT 06520
13
Abstract
Personalized learning environments have the potential to improve learning outcomes for children in a variety of educational domains, as they can tailor instruction to the unique learning needs of individuals. Robot tutoring systems can further engage users by leveraging their potential for embodied social interaction and by taking into account crucial aspects of a learner, such as a student's motivation. In this article, we demonstrate that motivation in young learners corresponds to observable behaviors when interacting with a robot tutoring system, which, in turn, impact learning outcomes. We first detail a user study involving children interacting one on one with a robot tutoring system over multiple sessions. Based on empirical data, we show that academic motivation stemming from one's own values or goals, as assessed by the Academic Self-Regulation Questionnaire (SRQ-A), correlates with observed suboptimal help-seeking behavior during the initial tutoring session. We then show how an interactive robot that responds intelligently to these observed behaviors in subsequent tutoring sessions can positively impact both student behavior and learning outcomes over time. These results provide empirical evidence for the link between internal motivation, observable behavior, and learning outcomes in the context of robot-child tutoring. We also identified an additional suboptimal behavioral feature within our tutoring environment and demonstrated its relationship to internal factors of motivation, suggesting further opportunities to design robot interventions that enhance learning. We provide insights into the design of robot tutoring systems aimed at delivering effective behavioral interventions for children and discuss the broader challenges currently faced by robot-child tutoring systems.
14
Abstract
The benefits of personalized social robots must be evaluated in real-world educational contexts, over periods longer than a single session, to understand their full potential to impact learning outcomes. In this work, we describe a personalization system designed for longer-term use that orders curriculum based on an adaptive Hidden Markov Model (HMM) that evaluates students’ skill proficiencies. We present a study investigating the effectiveness of this system in a five-session interaction with a robot tutor, taking place over the course of 2 weeks. Our system is evaluated in the context of native Spanish-speaking first-graders interacting with a social robot tutor while completing an English Language Learning educational task. Participants received lessons either (1) ordered by our adaptive HMM personalization system, which selects a lesson based on the skill the individual participant needs the most practice with (“personalized condition”), or (2) ordered randomly from among the lessons the participant had not yet seen (“non-personalized condition”). We found that participants who received personalized lessons from the robot tutor outperformed participants who received non-personalized lessons on a post-test by 2.0 standard deviations on average, corresponding to a mean learning gain in the 98th percentile.
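The skill-proficiency bookkeeping behind this kind of personalization can be sketched roughly as follows. This is a generic two-state (unmastered/mastered) knowledge-tracing model with illustrative parameter names and values, not the authors' implementation:

```python
# Hypothetical sketch of per-skill proficiency tracking with a two-state
# hidden Markov model, in the spirit of the adaptive-HMM personalization
# described above. All names and parameter values are made up.

def update_mastery(p_mastered, correct,
                   p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """Bayesian update of P(mastered) given one observed answer,
    followed by a learning transition (a standard BKT-style step)."""
    if correct:
        likelihood = p_mastered * (1 - p_slip)
        evidence = likelihood + (1 - p_mastered) * p_guess
    else:
        likelihood = p_mastered * p_slip
        evidence = likelihood + (1 - p_mastered) * (1 - p_guess)
    posterior = likelihood / evidence
    # Chance of transitioning to the mastered state after practice.
    return posterior + (1 - posterior) * p_learn

def pick_next_lesson(skills):
    """Select the skill with the lowest estimated mastery,
    i.e. the one the student needs the most practice with."""
    return min(skills, key=skills.get)

skills = {"vocabulary": 0.6, "spelling": 0.3, "pronunciation": 0.8}
skills["spelling"] = update_mastery(skills["spelling"], correct=True)
print(pick_next_lesson(skills))  # → vocabulary
```

One correct answer lifts the "spelling" estimate above "vocabulary", so the selector moves on to the now-weakest skill; an incorrect answer would instead lower the posterior and keep that skill in focus.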
15
Scassellati B, Boccanfuso L, Huang CM, Mademtzi M, Qin M, Salomons N, Ventola P, Shic F. Improving social skills in children with ASD using a long-term, in-home social robot. Sci Robot 2018; 3:eaat7544. [PMID: 33141724] [PMCID: PMC10957097] [DOI: 10.1126/scirobotics.aat7544]
Abstract
Social robots can offer tremendous possibilities for autism spectrum disorder (ASD) interventions. To date, most studies with this population have used short, isolated encounters in controlled laboratory settings. Our study focused on a 1-month, home-based intervention for increasing social communication skills of 12 children with ASD between 6 and 12 years old using an autonomous social robot. The children engaged in a triadic interaction with a caregiver and the robot for 30 min every day to complete activities on emotional storytelling, perspective-taking, and sequencing. The robot encouraged engagement, adapted the difficulty of the activities to the child's past performance, and modeled positive social skills. The system maintained engagement over the 1-month deployment, and children showed improvement in joint attention skills with adults when not in the presence of the robot. These results were also consistent with caregiver questionnaires. Caregivers reported less prompting over time and overall increased communication.
Affiliation(s)
- B. Scassellati
- Department of Computer Science, Yale University, New Haven, CT 06520
- L. Boccanfuso
- Child Study Center, Yale School of Medicine, New Haven, CT 06520
- C.-M. Huang
- Department of Computer Science, Yale University, New Haven, CT 06520
- M. Mademtzi
- Child Study Center, Yale School of Medicine, New Haven, CT 06520
- M. Qin
- Department of Computer Science, Yale University, New Haven, CT 06520
- N. Salomons
- Department of Computer Science, Yale University, New Haven, CT 06520
- P. Ventola
- Child Study Center, Yale School of Medicine, New Haven, CT 06520
- F. Shic
- Child Study Center, Yale School of Medicine, New Haven, CT 06520
16
Belpaeme T, Kennedy J, Ramachandran A, Scassellati B, Tanaka F. Social robots for education: A review. Sci Robot 2018; 3(21):eaat5954. [PMID: 33141719] [DOI: 10.1126/scirobotics.aat5954]
Abstract
Social robots can be used in education as tutors or peer learners. They have been shown to be effective at increasing cognitive and affective outcomes and have achieved outcomes similar to those of human tutoring on restricted tasks. This is largely because of their physical presence, which traditional learning technologies lack. We review the potential of social robots in education, discuss the technical challenges, and consider how the robot's appearance and behavior affect learning outcomes.
Collapse
Affiliation(s)
- Tony Belpaeme
- Ghent University, Ghent, Belgium; University of Plymouth, Plymouth, UK
17
Leite I, McCoy M, Lohani M, Ullman D, Salomons N, Stokes C, Rivers S, Scassellati B. Narratives with Robots: The Impact of Interaction Context and Individual Differences on Story Recall and Emotional Understanding. Front Robot AI 2017. [DOI: 10.3389/frobt.2017.00029]
18
19
Wang Q, Kim E, Chawarska K, Scassellati B, Zucker S, Shic F. On Relationships Between Fixation Identification Algorithms and Fractal Box Counting Methods. Proc Eye Track Res Appl Symp 2014; 2014:67-74. [PMID: 26504903] [DOI: 10.1145/2578153.2578161]
Abstract
Fixation identification algorithms facilitate data comprehension and provide analytical convenience in eye-tracking analysis. However, current fixation algorithms for eye-tracking analysis are heavily dependent on parameter choices, leading to instabilities in results and incompleteness in reporting. This work examines the nature of human scanning patterns during complex scene viewing. We show that standard implementations of the commonly used distance-dispersion algorithm for fixation identification are functionally equivalent to greedy spatiotemporal tiling. We show that modeling the number of fixations as a function of tiling size leads to a measure of fractal dimensionality through box counting. We apply this technique to examine scale-free gaze behaviors in toddlers and adults looking at images of faces and blocks, as well as a large number of adults looking at movies or static images. The distributional aspects of the number of fixations may suggest a fractal structure to gaze patterns in free scanning and imply that the incompleteness of standard algorithms may be due to the scale-free behaviors of the underlying scanning distributions. We discuss the nature of this hypothesis, its limitations, and offer directions for future work.
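The box-counting idea described above can be sketched as follows: tile the gaze field at several grid sizes, count occupied tiles, and estimate a fractal dimension from the log-log slope. This is a toy illustration with made-up data and parameters, not the authors' analysis pipeline:

```python
# Illustrative box-counting sketch: spatial tiling of gaze samples at
# multiple scales, then a least-squares fit of log(count) vs log(1/size).
import math
import random

def occupied_tiles(points, tile_size):
    """Number of distinct spatial tiles containing at least one
    gaze sample (greedy spatial tiling at a single scale)."""
    return len({(int(x // tile_size), int(y // tile_size))
                for x, y in points})

def box_counting_dimension(points, sizes):
    """Least-squares slope of log(count) against log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(occupied_tiles(points, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
gaze = [(random.uniform(0, 100), random.uniform(0, 100))
        for _ in range(2000)]
d = box_counting_dimension(gaze, sizes=[2, 4, 8, 16, 32])
# For area-filling points the estimate approaches 2; finite sampling
# and edge effects pull it somewhat below the ideal value.
print(round(d, 2))
```

A strongly clustered (scale-free) scanning pattern would yield a noticeably lower slope than uniformly spread samples, which is the kind of signal the fractal analysis is after.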
20
Kim ES, Berkovits LD, Bernier EP, Leyzberg D, Shic F, Paul R, Scassellati B. Social Robots as Embedded Reinforcers of Social Behavior in Children with Autism. J Autism Dev Disord 2012; 43:1038-49. [DOI: 10.1007/s10803-012-1645-2]
21
Affiliation(s)
- Brian Scassellati
- Department of Computer Science, Yale University, New Haven, Connecticut 06520
- Henny Admoni
- Department of Computer Science, Yale University, New Haven, Connecticut 06520
- Maja Matarić
- Departments of Computer Science and Pediatrics, University of Southern California, Los Angeles, California 90089-1450
22
23
McClelland J, Weng J, Deák G, Scassellati B. Cognitive Science Meets Autonomous Mental Development. Cogn Sci 2010; 34:533-4. [DOI: 10.1111/j.1551-6709.2010.01097.x]
24
25
26
Abstract
The study of social learning in robotics has been motivated by both scientific interest in the learning process and practical desires to produce machines that are useful, flexible, and easy to use. In this review, we introduce the social and task-oriented aspects of robot imitation. We focus on methodologies for addressing two fundamental problems. First, how does the robot know what to imitate? And second, how does the robot map that perception onto its own action repertoire to replicate it? In the future, programming humanoid robots to perform new tasks might be as simple as showing them.
Affiliation(s)
- Cynthia Breazeal
- The Media Lab, Massachusetts Institute of Technology, 77 Massachusetts Ave NE18-5FL, Cambridge, MA 02139, USA
27
28
29
Scassellati B. Imitation and Mechanisms of Joint Attention: A Developmental Structure for Building Social Skills on a Humanoid Robot. Computation for Metaphors, Analogy, and Agents 1999. [DOI: 10.1007/3-540-48834-0_11]