1. Onnasch L, Schweidler P, Schmidt H. The potential of robot eyes as predictive cues in HRI - an eye-tracking study. Front Robot AI 2023; 10:1178433. [PMID: 37575370] [PMCID: PMC10416260] [DOI: 10.3389/frobt.2023.1178433]
Abstract
Robots currently provide only limited information about their future movements to human collaborators. In human interaction, communication through gaze can be helpful by intuitively directing attention to specific targets. Whether and how this mechanism could benefit interaction with robots, and what a general design of predictive robot eyes should look like, is not well understood. In a between-subjects design, four different types of eyes were therefore compared with regard to their attention-directing potential: a pair of arrows, human eyes, and two anthropomorphic robot eye designs. For this purpose, 39 subjects performed a novel, screen-based gaze-cueing task in the laboratory. Participants' attention was measured using manual responses and eye tracking, and the perception of the tested cues was assessed through additional subjective measures. All eye models were overall easy to read and were able to direct participants' attention. The anthropomorphic robot eyes were most efficient at shifting participants' attention, as revealed by faster manual and saccadic reaction times. In addition, a robot equipped with anthropomorphic eyes was perceived as more competent. Abstract anthropomorphic robot eyes therefore seem to trigger a reflexive reallocation of attention, which points to a social and automatic processing of such artificial stimuli.
2. Robotic Gaze Responsiveness in Multiparty Teamwork. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00955-1]
3. Bowsher-Murray C, Gerson S, von dem Hagen E, Jones CRG. The Components of Interpersonal Synchrony in the Typical Population and in Autism: A Conceptual Analysis. Front Psychol 2022; 13:897015. [PMID: 35734455] [PMCID: PMC9208202] [DOI: 10.3389/fpsyg.2022.897015]
Abstract
Interpersonal synchrony – the tendency for social partners to temporally co-ordinate their behaviour when interacting – is a ubiquitous feature of social interactions. Synchronous interactions play a key role in development and promote social bonding and a range of pro-social behavioural outcomes across the lifespan. The process of achieving and maintaining interpersonal synchrony is highly complex, with inputs required from across perceptual, temporal, motor, and socio-cognitive domains. In this conceptual analysis, we synthesise evidence from across these domains to establish the key components underpinning successful non-verbal interpersonal synchrony, how such processes interact, and factors that may moderate their operation. We also consider emerging evidence that interpersonal synchrony is reduced in autistic populations. We use our account of the components contributing to interpersonal synchrony in the typical population to identify potential points of divergence in interpersonal synchrony in autism. The relationship between interpersonal synchrony and broader aspects of social communication in autism is also considered, together with implications for future research.
Affiliation(s)
- Claire Bowsher-Murray
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Correspondence: Claire Bowsher-Murray
- Sarah Gerson
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Elisabeth von dem Hagen
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Brain Imaging Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Catherine R. G. Jones
- Wales Autism Research Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Cardiff University Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Correspondence: Catherine R. G. Jones
4. Roesler E, Manzey D, Onnasch L. A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Sci Robot 2021; 6:eabj5425. [PMID: 34516745] [DOI: 10.1126/scirobotics.abj5425]
Abstract
The application of anthropomorphic design features is widely assumed to facilitate human-robot interaction (HRI). However, a considerable number of study results point in the opposite direction, and there is currently no comprehensive common ground on the circumstances under which anthropomorphism promotes interaction with robots. Our meta-analysis aims to close this gap. A total of 4856 abstracts were scanned. After an extensive evaluation, 78 studies involving around 6000 participants and 187 effect sizes were included in this meta-analysis. The majority of the studies addressed effects on perceptual aspects of robots; effects on attitudinal, affective, and behavioral aspects were also investigated. Overall, a medium positive effect size was found, indicating a beneficial effect of anthropomorphic design features on human-related outcomes. However, closer scrutiny of the lowest variable level revealed no positive effect for perceived safety, empathy, and task performance. Moreover, the analysis suggests that positive effects of anthropomorphism depend heavily on various moderators: for example, in contrast to other fields of application, anthropomorphism consistently facilitated social HRI. The results of this analysis provide insights into how design features can be used to improve the quality of HRI. They also reveal areas in which more research is needed before any clear conclusions about the effects of anthropomorphic robot design can be drawn.
Affiliation(s)
- E Roesler
- Technische Universität Berlin, Berlin, Germany
- D Manzey
- Technische Universität Berlin, Berlin, Germany
- L Onnasch
- Humboldt-Universität zu Berlin, Berlin, Germany
5. Optimizing Android Facial Expressions Using Genetic Algorithms. Appl Sci (Basel) 2019. [DOI: 10.3390/app9163379]
Abstract
Because android faces differ in internal structure, degrees of freedom, and skin control positions and ranges, it is very difficult to generate facial expressions by applying existing facial expression generation methods. In addition, facial expressions differ among robots because they are designed subjectively. To address these problems, we developed a system that can automatically generate robot facial expressions by combining an android, a recognizer capable of classifying facial expressions, and a genetic algorithm. We developed two types of android face robots (an older man and a young woman) that can simulate human skin movements, and selected 16 control positions to generate their facial expressions. Expressions were generated by combining the displacements of 16 motors. A chromosome comprising 16 genes (motor displacements) was generated by applying real-coded genetic algorithms and subsequently used to generate robot facial expressions. To determine the fitness of the generated facial expressions, expression intensity was evaluated through a facial expression recognizer. The proposed system was used to generate six facial expressions (angry, disgust, fear, happy, sad, surprised); the results confirmed that they were more appropriate than manually generated facial expressions.
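The loop described in this abstract (a real-coded chromosome of 16 motor displacements, scored by a facial-expression recognizer) can be sketched roughly as below. All names, population settings, and the operators are hypothetical choices for illustration, and the recognizer is stubbed out with a toy scoring function; the paper's actual fitness comes from classifying a rendered android face.

```python
import random

N_MOTORS = 16      # control positions on the android face, per the abstract
POP_SIZE = 30      # assumed population size (not given in the abstract)
GENERATIONS = 50
MUTATION_RATE = 0.1

def recognizer_score(genes, target="happy"):
    # Stand-in for the facial-expression recognizer: should return the
    # classifier's confidence that the rendered face shows `target`.
    # Here we fake it with negative squared distance to an arbitrary
    # optimum so the sketch runs end to end.
    optimum = [0.5] * N_MOTORS
    return -sum((g - o) ** 2 for g, o in zip(genes, optimum))

def crossover(a, b):
    # Real-coded (blend) crossover: each child gene is drawn from the
    # interval spanned by the two parent genes.
    return [random.uniform(min(x, y), max(x, y)) for x, y in zip(a, b)]

def mutate(genes):
    # Gaussian perturbation, clamped to the motor's normalized range.
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < MUTATION_RATE else g
            for g in genes]

def evolve_expression(target="happy"):
    pop = [[random.random() for _ in range(N_MOTORS)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the fitter half (elitism), refill with mutated offspring.
        pop.sort(key=lambda g: recognizer_score(g, target), reverse=True)
        elite = pop[: POP_SIZE // 2]
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(POP_SIZE - len(elite))]
        pop = elite + children
    return max(pop, key=lambda g: recognizer_score(g, target))

best = evolve_expression("happy")  # 16 evolved motor displacements
```

With a real recognizer in place of the stub, the returned chromosome would be sent to the 16 face motors to display the evolved expression.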
6. Bishop L, Cancino-Chacón C, Goebl W. Eye gaze as a means of giving and seeking information during musical interaction. Conscious Cogn 2019; 68:73-96. [PMID: 30660927] [PMCID: PMC6374286] [DOI: 10.1016/j.concog.2019.01.002]
Abstract
During skilled music ensemble performance, a multi-layered network of interaction processes allows musicians to negotiate common interpretations of ambiguously-notated music in real-time. This study investigated the conditions that encourage visual interaction during duo performance. Duos recorded performances of a new piece before and after a period of rehearsal. Mobile eye tracking and motion capture were used in combination to map uni- and bidirectional eye gaze patterns. Musicians watched each other more during temporally-unstable passages than during regularly-timed passages. They also watched each other more after rehearsal than before. Duo musicians may seek visual interaction with each other primarily, but not exclusively, when coordination is threatened by temporal instability. Visual interaction increases as musicians become familiar with the piece, suggesting that they visually monitor each other once a shared interpretation of the piece is established. Visual monitoring of co-performers' movements and attention may facilitate feelings of engagement and high-level creative collaboration.
Affiliation(s)
- Laura Bishop
- Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria
- Carlos Cancino-Chacón
- Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria; Institute of Computational Perception, Johannes Kepler University Linz, Austria
- Werner Goebl
- Department of Music Acoustics, University of Music and Performing Arts Vienna, Austria
7. Dixon P, Glover S. Solo versus joint bimanual coordination. Exp Brain Res 2018; 237:273-287. [PMID: 30390100] [DOI: 10.1007/s00221-018-5420-2]
Abstract
Understanding the differences between solo and joint action control is an important goal in psychology. The present study represented a novel approach in which participants performed a bimanual finger oscillation task, either alone or in pairs. It was hypothesized that performance of this task relies heavily on attention and utilizes two independent processes that differentially affect solo and joint performance. One process attempts to align the fingers correctly regardless of oscillation speed, and this is reflected in an alignment error evident even at slow oscillations. A second process attempts to minimize the time lag between the fingers as the oscillation speed increases, reflected in a temporal error indexed by the rate of error increase with increasing movement speed. In three experiments, alignment and temporal error in the finger oscillation task were compared in solo and joint actors. Overall, solo actors had much lower alignment error than joint actors. Solo actors also showed a reduction in temporal error when the fingers moved in a symmetrical rather than parallel fashion, consistent with previous research showing an increase in error with increasing movement speed. However, the effect of symmetry on temporal error did not occur with joint actors. Similar results were found with one hand inverted, suggesting that the pattern of results was not due to the use of homologous muscles. To test the role of visual feedback, we examined the effect of denying visual feedback to one of the actors in the joint condition. Paradoxically, under these conditions, there was lower temporal error in the symmetrical condition. These results are interpreted in terms of the organization of solo versus joint actions and the control of bimanual tasks in general.
Affiliation(s)
- Peter Dixon
- Department of Psychology, University of Alberta, Edmonton, AB, T6G 2E9, Canada
8. Brezis RS, Noy L, Alony T, Gotlieb R, Cohen R, Golland Y, Levit-Binnun N. Patterns of Joint Improvisation in Adults with Autism Spectrum Disorder. Front Psychol 2017; 8:1790. [PMID: 29114236] [PMCID: PMC5660713] [DOI: 10.3389/fpsyg.2017.01790]
Abstract
Recent research on autism spectrum disorders (ASDs) suggests that individuals with autism may have a basic deficit in synchronizing with others, and that this difficulty may lead to more complex social and communicative deficits. Here, we examined synchronization during an open-ended joint improvisation (JI) paradigm, called the mirror game (MG). In the MG, two players take turns leading, following, and jointly improvising motion using two handles set on parallel tracks, while their motion tracks are recorded with high temporal and spatial resolution. A series of previous studies has shown that players in the MG attain moments of highly synchronized co-confident (CC) motion, in which there is no typical kinematic pattern of leader and reactive follower. It has been suggested that during these moments players act as a coupled unit and feel high levels of connectedness. Here, we aimed to assess whether participants with ASD are capable of attaining CC, and whether their MG performance relates to broader motor and social skills. We found that participants with ASD (n = 34) can indeed attain CC moments when playing with an expert improviser, though their performance was attenuated in several ways compared to typically developing (TD) participants (n = 35). Specifically, ASD participants had lower rates of CC than TD participants, a difference most pronounced during the following rounds, and the duration of their CC segments was shorter across all rounds. Controlling for participants' motor skills (both on the MG console and more broadly) explained some of the variability in MG performance, but group differences remained. ASD participants' alexithymia further correlated with their difficulty following another's lead, though other social skills did not relate to MG performance. Participants' subjective reports of the game suggest that other cognitive and emotional factors not directly measured in the experiment, such as attention, motivation, and reward processing, may impact their performance. Together, these results show that ASD participants can attain moments of high motor synchronization with an expert improviser, even during an open-ended task. Future studies should examine the ways in which these skills may be further harnessed in clinical settings.
Affiliation(s)
- Rachel-Shlomit Brezis
- Sagol Center for Brain and Mind, Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel
- Lior Noy
- Theatre Lab, Weizmann Institute of Science, Rehovot, Israel
- Tali Alony
- Sagol Center for Brain and Mind, Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel
- Rachel Gotlieb
- Sagol Center for Brain and Mind, Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel
- Rachel Cohen
- Sagol Center for Brain and Mind, Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel
- Yulia Golland
- Sagol Center for Brain and Mind, Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel
- Nava Levit-Binnun
- Sagol Center for Brain and Mind, Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel
9. Bao Y, Cuijpers RH. On the Imitation of Goal Directed Movements of a Humanoid Robot. Int J Soc Robot 2017. [DOI: 10.1007/s12369-017-0417-8]
10. Noy L, Weiser N, Friedman J. Synchrony in Joint Action Is Directed by Each Participant's Motor Control System. Front Psychol 2017; 8:531. [PMID: 28443047] [PMCID: PMC5385352] [DOI: 10.3389/fpsyg.2017.00531]
Abstract
In this work, we ask how the probability of achieving synchrony in joint action is affected by each individual's choice of motion parameters. We use the mirror game paradigm to study how changes in the leader's motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confident (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. To study this question systematically, we used a one-person version of the mirror game, in which the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of co-confident motion: smooth, jitter-less and synchronized motion indicative of co-predictive control. We found that when mirroring movements with low frequencies (i.e., long-duration movements), participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability of performing CC with three sets of data from open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies that maximize the possibility of entering a state of synchronized joint action, and that the ability to perform synchronized joint action is constrained by the properties of our motor control systems.
Affiliation(s)
- Lior Noy
- Department of Molecular Cell Biology, Weizmann Institute of Science, Rehovot, Israel
- The Theatre Lab, Weizmann Institute of Science, Rehovot, Israel
- Netta Weiser
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Jason Friedman
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Department of Physical Therapy, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
11. Raffard S, Bortolon C, Khoramshahi M, Salesse RN, Burca M, Marin L, Bardy BG, Billard A, Macioce V, Capdevielle D. Humanoid robots versus humans: How is emotional valence of facial expressions recognized by individuals with schizophrenia? An exploratory study. Schizophr Res 2016; 176:506-513. [PMID: 27293136] [DOI: 10.1016/j.schres.2016.06.001]
Abstract
BACKGROUND: The use of humanoid robots to play a therapeutic role in helping individuals with social disorders such as autism is a newly emerging field, but remains unexplored in schizophrenia. As the ability of robots to convey emotion appears to be of fundamental importance for human-robot interactions, we aimed to evaluate how schizophrenia patients recognize positive and negative facial emotions displayed by a humanoid robot.
METHODS: We included 21 schizophrenia outpatients and 17 healthy participants. In a reaction time task, they were shown photographs of human faces and of a humanoid robot (iCub) expressing either positive or negative emotions, as well as a non-social stimulus. Patients' symptomatology, mind perception, reaction times and number of correct answers were evaluated.
RESULTS: Patients and controls recognized the emotional valence of facial expressions better and faster when expressed by humans than by the robot. Participants were faster when responding to positive compared to negative human faces and, inversely, faster for negative compared to positive robot faces. Importantly, participants performed worse when they perceived iCub as being capable of experiencing things (experience subscale of the mind perception questionnaire). In schizophrenia patients, negative correlations emerged between negative symptoms and accuracy for both the robot's and the human's negative faces.
CONCLUSIONS: Individuals do not respond similarly to human facial emotion and to non-anthropomorphic emotional signals. Humanoid robots have the potential to convey emotions to patients with schizophrenia, but their appearance seems of major importance for human-robot interactions.
Affiliation(s)
- Stéphane Raffard
- Epsylon Laboratory Dynamic of Human Abilities & Health Behaviors, University of Montpellier 3, Montpellier, France; University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France
- Catherine Bortolon
- Epsylon Laboratory Dynamic of Human Abilities & Health Behaviors, University of Montpellier 3, Montpellier, France; University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France
- Mahdi Khoramshahi
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
- Robin N Salesse
- EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
- Marianna Burca
- Epsylon Laboratory Dynamic of Human Abilities & Health Behaviors, University of Montpellier 3, Montpellier, France
- Ludovic Marin
- EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France
- Benoit G Bardy
- EuroMov, Montpellier University, 700 Avenue du Pic Saint-Loup, 34090 Montpellier, France; Institut Universitaire de France, France
- Aude Billard
- Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
- Valérie Macioce
- Clinical & Epidemiological Research Unit, CHU, Montpellier, France
- Delphine Capdevielle
- University Department of Adult Psychiatry, Hôpital de la Colombière, CHRU Montpellier, Montpellier University, Montpellier, France; INSERM U-1061, Montpellier, France