1. Dubois-Sage M, Jacquet B, Jamet F, Baratgin J. People with Autism Spectrum Disorder Could Interact More Easily with a Robot than with a Human: Reasons and Limits. Behav Sci (Basel) 2024; 14:131. [PMID: 38392485] [PMCID: PMC10886012] [DOI: 10.3390/bs14020131]
Abstract
Individuals with Autism Spectrum Disorder show deficits in communication and social interaction, as well as repetitive behaviors and restricted interests. Interacting with robots could benefit this population, notably by fostering communication and social interaction. Studies even suggest that people with Autism Spectrum Disorder may interact more easily with a robot partner than with a human partner. We examine the benefits of robots and the reasons put forward to explain these results. The interest in robots appears to stem mainly from three of their characteristics: they can act as motivational tools; they are simplified agents; and their behavior is more predictable than that of a human. Nevertheless, many challenges remain in specifying the optimum conditions for using robots with individuals with Autism Spectrum Disorder.
Affiliation(s)
- Marion Dubois-Sage: Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France
- Baptiste Jacquet: Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France; Association P-A-R-I-S, 75005 Paris, France
- Frank Jamet: Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France; Association P-A-R-I-S, 75005 Paris, France; UFR d'Éducation, CY Cergy Paris Université, 95000 Cergy-Pontoise, France
- Jean Baratgin: Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U, UFR de Psychologie, Université Paris 8, 93526 Saint-Denis, France; Association P-A-R-I-S, 75005 Paris, France
2. Zonca J, Folsø A, Sciutti A. Social Influence Under Uncertainty in Interaction with Peers, Robots and Computers. Int J Soc Robot 2023. [DOI: 10.1007/s12369-022-00959-x]
Abstract
Taking advice from others requires confidence in their competence. This is important for interaction with peers, but also for collaboration with social robots and artificial agents. Nonetheless, we do not always have access to information about others' competence or performance. In these uncertain environments, do our prior beliefs about the nature and the competence of our interacting partners modulate our willingness to rely on their judgments? In a joint perceptual decision-making task, participants made perceptual judgments and observed the simulated estimates of either a human participant, a social humanoid robot, or a computer. They could then modify their estimates based on this feedback. Results show that participants' beliefs about the nature of their partner biased their compliance with its judgments: participants were more influenced by the social robot than by human and computer partners. This difference emerged strongly at the very beginning of the task and decreased with repeated exposure to empirical feedback on the partner's responses, disclosing the role of prior beliefs in social influence under uncertainty. Furthermore, the results of our functional task suggest an important difference between human-human and human-robot interaction in the absence of overt socially relevant signals from the partner: the former is modulated by social normative mechanisms, whereas the latter is guided by purely informational mechanisms linked to the perceived competence of the partner.
3. Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3526112]
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies conducted so far exhibit considerable diversity in how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states, which appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for its investigation.
4. British Children's and Adults' Perceptions of Robots. Human Behavior and Emerging Technologies 2022. [DOI: 10.1155/2022/3813820]
Abstract
Robotics and artificial intelligence (AI) systems are quickly becoming a familiar part of everyday life. Yet we know very little about how children and adults perceive the abilities of different robots and whether these ascriptions are associated with a willingness to interact with a robot. In the current study, we asked British children aged 4-13 years and British adults to complete an online experiment. Participants were asked to describe what a robot looks like, give their preference for various types of robots (a social robot, a machine-like robot, and a human-like robot), and indicate whether they were willing to engage in different activities with the different robots. Results showed that younger children (4 to 8 years old) are more willing to engage with robots than older children (9 to 13 years) and adults. Specifically, younger children were more likely than older children and adults to see robots as kind, and more likely to rate the social robot as helpful. This is also the first study to examine preferences for robots engaging in religious activities: results show that British adults prefer humans over robots to pray for them, but such biases may not be generally applicable to children. These results provide new insight into how children and adults in the United Kingdom accept the presence and function of robots.
5. Zonca J, Folsø A, Sciutti A. The role of reciprocity in human-robot social influence. iScience 2021; 24:103424. [PMID: 34877490] [PMCID: PMC8633024] [DOI: 10.1016/j.isci.2021.103424]
Abstract
Humans are constantly influenced by others' behavior and opinions. Importantly, social influence among humans is shaped by reciprocity: we are more likely to follow the advice of someone who has taken our opinions into consideration. In the current work, we investigate whether reciprocal social influence can emerge while interacting with a social humanoid robot. In a joint task, a human participant and a humanoid robot made perceptual estimates and could then overtly modify them after observing the partner's judgment. Results show that endowing the robot with the ability to express and modulate its own level of susceptibility to the human's judgments represented a double-edged sword. On the one hand, participants lost confidence in the robot's competence when the robot was following their advice; on the other hand, participants were unwilling to disclose their lack of confidence to the susceptible robot, suggesting the emergence of reciprocal mechanisms of social influence supporting human-robot collaboration.
Highlights:
- If a social robot is susceptible to our advice, we lose confidence in it
- However, the robot's susceptibility does not deteriorate social influence
- These effects do not appear during interaction with a computer
- Susceptible robots can promote reciprocity but also hinder social learning
Affiliation(s)
- Joshua Zonca (corresponding author): Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen 83, 16152 Genoa, GE, Italy
- Anna Folsø: Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
- Alessandra Sciutti: Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen 83, 16152 Genoa, GE, Italy
6. Flanagan T, Rottman J, Howard LH. Constrained Choice: Children's and Adults' Attribution of Choice to a Humanoid Robot. Cogn Sci 2021; 45:e13043. [PMID: 34606132] [DOI: 10.1111/cogs.13043]
Abstract
Young children, like adults, understand that human agents can flexibly choose different actions in different contexts, and they evaluate these agents based on such choices. However, little is known about children's tendencies to attribute the capacity to choose to robots, despite increased contact with robotic agents. In this paper, we compare 5- to 7-year-old children's and adults' attributions of free choice to a robot and to a human child by using a series of tasks measuring agency attribution, action prediction, and choice attribution. In morally neutral scenarios, children ascribed similar levels of free choice to the robot and the human, while adults were more likely to ascribe free choice to the human. For morally relevant scenarios, however, both age groups considered the robot's actions to be more constrained than the human's actions. These findings demonstrate that children and adults hold a nuanced understanding of free choice that is sensitive to both the agent type and constraints within a given scenario.
7. Peter J, Kühne R, Barco A. Can social robots affect children's prosocial behavior? An experimental study on prosocial robot models. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2021.106712]
8. Lee JG, Lee J, Lee D. Cheerful encouragement or careful listening: The dynamics of robot etiquette at children's different developmental stages. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2021.106697]
9. Oliveira R, Arriaga P, Santos FP, Mascarenhas S, Paiva A. Towards prosocial design: A scoping review of the use of robots and virtual agents to trigger prosocial behaviour. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2020.106547]
10. Baratgin J, Dubois-Sage M, Jacquet B, Stilgenbauer JL, Jamet F. Pragmatics in the False-Belief Task: Let the Robot Ask the Question! Front Psychol 2020; 11:593807. [PMID: 33329255] [PMCID: PMC7719623] [DOI: 10.3389/fpsyg.2020.593807]
Abstract
The poor performance of typically developing children younger than 4 on the first-order false-belief task "Maxi and the chocolate" is analyzed from the perspective of conversational pragmatics. An ambiguous question asked by an adult experimenter (perceived as a teacher) can receive different interpretations based on a search for relevance: depending on their age, children attribute different intentions to the questioner, within the limits of their own meta-cognitive knowledge. The adult experimenter tells the child the following object-transfer story: "Maxi puts his chocolate into the green cupboard before going out to play. In his absence, his mother moves the chocolate from the green cupboard to the blue one." The child must then predict where Maxi will look for the chocolate when he returns. To the child, the question from an adult (a knowledgeable person) may seem surprising and can be understood as a question about the child's own knowledge of the world, rather than about Maxi's mental representations. In our study, without any other modification of the initial task, we disambiguate the context of the question by (1) replacing the adult experimenter with a humanoid robot presented as "ignorant" and "slow" but trying to learn and (2) placing the child in the role of a "mentor" (the knowledgeable person). Sixty-two typically developing 3-year-old children completed the first-order false-belief task "Maxi and the chocolate," either with a human or with a robot. Results revealed a significantly higher success rate in the robot condition than in the human condition. Thus, young children seem to fail because of the pragmatic difficulty of the first-order task, which causes a difference of interpretation between the young child and the experimenter.
Affiliation(s)
- Jean Baratgin: Laboratoire Cognition Humaine et Artificielle, Université Paris 8, Paris, France; Probability, Assessment, Reasoning and Inferences Studies (P-A-R-I-S) Association, Paris, France
- Marion Dubois-Sage: Laboratoire Cognition Humaine et Artificielle, Université Paris 8, Paris, France; Probability, Assessment, Reasoning and Inferences Studies (P-A-R-I-S) Association, Paris, France
- Baptiste Jacquet: Laboratoire Cognition Humaine et Artificielle, Université Paris 8, Paris, France; Probability, Assessment, Reasoning and Inferences Studies (P-A-R-I-S) Association, Paris, France
- Jean-Louis Stilgenbauer: Laboratoire Cognition Humaine et Artificielle, Université Paris 8, Paris, France; Probability, Assessment, Reasoning and Inferences Studies (P-A-R-I-S) Association, Paris, France; Facultés Libres de Philosophie et de Psychologie (IPC), Paris, France
- Frank Jamet: Laboratoire Cognition Humaine et Artificielle, Université Paris 8, Paris, France; Probability, Assessment, Reasoning and Inferences Studies (P-A-R-I-S) Association, Paris, France; CY Cergy-Paris Université, ESPE de Versailles, Paris, France
11. Yang JC, Chen SY. An investigation of game behavior in the context of digital game-based learning: An individual difference perspective. Computers in Human Behavior 2020. [DOI: 10.1016/j.chb.2020.106432]
12. Martin DU, MacIntyre MI, Perry C, Clift G, Pedell S, Kaufman J. Young Children's Indiscriminate Helping Behavior Toward a Humanoid Robot. Front Psychol 2020; 11:239. [PMID: 32153463] [PMCID: PMC7047927] [DOI: 10.3389/fpsyg.2020.00239]
Abstract
Young children help others in a range of situations, largely irrespective of the characteristics of those they help. Recent results have suggested that young children's helping behavior extends even to humanoid robots. However, it has remained unclear how the characteristics of robots influence children's helping behavior. Considering previous findings that certain robot features influence adults' perception of and behavior toward robots, the question arises whether young children's behavior and perception follow the same principles. The current study investigated whether two key characteristics of a humanoid robot (animate autonomy and friendly expressiveness) affect children's instrumental helping behavior and their perception of the robot as an animate being. Eighty-two 3-year-old children participated in one of four experimental conditions manipulating a robot's ostensible animate autonomy (high/low) and friendly expressiveness (friendly/neutral). Helping was assessed in an out-of-reach task, and animacy ratings were assessed in a post-test interview. Results suggested that both children's helping behavior and their perception of the robot as animate were unaffected by the robot's characteristics. The findings indicate that young children's helping behavior extends largely indiscriminately across these two important characteristics. These results increase our understanding of the development of children's altruistic behavior and animate-inanimate distinctions. Our findings also raise important ethical questions for the field of child-robot interaction.
Affiliation(s)
- Dorothea U. Martin: Swinburne BabyLab, Department of Psychological Sciences, Swinburne University of Technology, Hawthorn, VIC, Australia
- Madeline I. MacIntyre: Swinburne BabyLab, Department of Psychological Sciences, Swinburne University of Technology, Hawthorn, VIC, Australia
- Conrad Perry: School of Psychology, The University of Adelaide, Adelaide, SA, Australia
- Georgia Clift: Swinburne BabyLab, Department of Psychological Sciences, Swinburne University of Technology, Hawthorn, VIC, Australia
- Sonja Pedell: Swinburne Future Self and Design Living Lab, Centre for Design Innovation, Swinburne University of Technology, Hawthorn, VIC, Australia
- Jordy Kaufman: Swinburne BabyLab, Department of Psychological Sciences, Swinburne University of Technology, Hawthorn, VIC, Australia