1. Abubshait A, Perez-Osorio J, De Tommaso D, Wykowska A. Temporal interplay between cognitive conflict and attentional markers in social collaboration. Psychophysiology 2024; 61:e14587. PMID: 38600626. DOI: 10.1111/psyp.14587.
Abstract
Cognitive processes must deal with contradictory demands in social contexts. On the one hand, social interactions carry a demand for cooperation, which requires processing social signals; on the other hand, selective attention demands that irrelevant signals be ignored to avoid overload. We created a task in which a humanoid robot displayed irrelevant social signals, imposing conflicting demands on selective attention. Participants interacted with the robot either as a teammate (high social demand; n = 23) or as a passive co-actor (low social demand; n = 19). We observed that theta oscillations indexed conflict processing of social signals. Subsequently, alpha oscillations were sensitive to both the conflicting social signals and the mode of interaction. These findings suggest that the brain has distinct mechanisms for dealing with the complexity of social interaction and that these mechanisms are engaged differently depending on the mode of interaction. Thus, how we process environmental stimuli depends on the beliefs we hold about our social context.
Affiliation(s)
- Abdulaziz Abubshait
  - S4HRI: Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Jairo Perez-Osorio
  - S4HRI: Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
  - Cognitive Psychology & Ergonomics, Technische Universität Berlin, Berlin, Germany
- Davide De Tommaso
  - S4HRI: Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
- Agnieszka Wykowska
  - S4HRI: Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genova, Italy
2. Chen D, Han Z, Zhang J, Xue L, Liu S. Additive Manufacturing Provides Infinite Possibilities for Self-Sensing Technology. Advanced Science (Weinheim) 2024; 11:e2400816. PMID: 38767180. PMCID: PMC11267329. DOI: 10.1002/advs.202400816.
Abstract
Integrating sensors and other functional parts in one device can enable a new generation of integrated intelligent devices that perform self-sensing and monitoring autonomously. Applications include buildings that detect and repair damage, robots that monitor conditions and perform real-time correction and reconstruction, aircraft capable of real-time perception of their internal and external environment, and medical devices and prosthetics with a realistic sense of touch. Although integrating sensors and other functional parts into self-sensing intelligent devices has become increasingly common, additive manufacturing of such devices has only been marginally explored. This review focuses on additive manufacturing integrated design, printing equipment, and printable materials and structures. The importance of integrating material, structure, and function in manufacturing is highlighted. The review summarizes current challenges to be addressed and provides suggestions for future development directions.
Affiliation(s)
- Daobing Chen
  - The Institute of Technological Science, Wuhan University, South Donghu Road 8, Wuhan 430072, China
- Zhiwu Han
  - The Key Laboratory of Bionic Engineering (Ministry of Education), Jilin University, Changchun, Jilin 130022, China
- Junqiu Zhang
  - The Key Laboratory of Bionic Engineering (Ministry of Education), Jilin University, Changchun, Jilin 130022, China
- Longjian Xue
  - School of Power and Mechanical Engineering, Wuhan University, South Donghu Road 8, Wuhan 430072, China
- Sheng Liu
  - The Institute of Technological Science, Wuhan University, South Donghu Road 8, Wuhan 430072, China
3. Qin X, Xia X, Ge Z, Liu Y, Yue P. The Design and Control of a Biomimetic Binocular Cooperative Perception System Inspired by the Eye Gaze Mechanism. Biomimetics (Basel) 2024; 9:69. PMID: 38392115. PMCID: PMC10886948. DOI: 10.3390/biomimetics9020069.
Abstract
Research on systems that imitate the gaze function of human eyes is valuable for the development of humanoid-eye intelligent perception. However, existing systems have several limitations, including redundant servo motors, a lack of camera-position adjustment components, and the absence of interest-point-driven binocular cooperative motion-control strategies. In response to these challenges, we designed a novel biomimetic binocular cooperative perception system (BBCPS) and realized its control. Inspired by the gaze mechanism of human eyes, we designed a simple and flexible biomimetic binocular cooperative perception device (BBCPD). Based on a dynamic analysis, the BBCPD was assembled according to the principle of symmetrical distribution around the center, which enhances braking performance and reduces operating energy consumption, as evidenced by simulation results. Moreover, we crafted an initial-position calibration technique that allows the camera pose and servo-motor zero position to be calibrated and adjusted, ensuring that the state of the BBCPD matches the subsequent control method. We then developed a control method for the BBCPS that combines interest-point detection with a motion-control strategy. Specifically, we propose a binocular interest-point extraction method based on frequency-tuned and template-matching algorithms for perceiving interest points. To move an interest point to the principal point, we present a binocular cooperative motion-control strategy: the rotation angles of the servo motors are calculated from the pixel difference between the principal point and the interest point, and PID-controlled servo motors are driven in parallel. Finally, real-world experiments validated the control performance of the BBCPS, demonstrating a gaze error of less than three pixels.
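The pixel-difference-to-rotation step and the PID drive described in this abstract can be sketched roughly as follows. The function names, the pinhole-model mapping, and the gain values are illustrative assumptions, not the paper's actual implementation.

```python
import math

class PID:
    """Minimal PID controller for one servo axis."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        # Standard PID law: proportional + integral + derivative terms.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def pixel_error_to_angle(interest_px, principal_px, focal_px):
    """Map the pixel offset between an interest point and the principal
    point to a required rotation (radians) under a pinhole camera model."""
    return math.atan2(interest_px - principal_px, focal_px)

# One control tick for the pan axis; in the paper's setup the two eyes'
# servos would run such loops in parallel.
pan_pid = PID(kp=0.8, ki=0.0, kd=0.05)
angle_error = pixel_error_to_angle(interest_px=350, principal_px=320, focal_px=600)
command = pan_pid.step(angle_error, dt=0.02)
```

When the interest point coincides with the principal point the angle error is zero and the servo command decays to zero, which is the gaze-fixation condition the system aims for.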
Affiliation(s)
- Xufang Qin
  - Key Laboratory of Road Construction Technology and Equipment of MOE, Chang'an University, Xi'an 710064, China
- Xiaohua Xia
  - Key Laboratory of Road Construction Technology and Equipment of MOE, Chang'an University, Xi'an 710064, China
- Zhaokai Ge
  - Key Laboratory of Road Construction Technology and Equipment of MOE, Chang'an University, Xi'an 710064, China
- Yanhao Liu
  - TianQin Research Center for Gravitational Physics and School of Physics and Astronomy, Sun Yat-sen University (Zhuhai Campus), Zhuhai 519082, China
- Pengju Yue
  - Key Laboratory of Road Construction Technology and Equipment of MOE, Chang'an University, Xi'an 710064, China
4. Parenti L, Belkaid M, Wykowska A. Differences in Social Expectations About Robot Signals and Human Signals. Cogn Sci 2023; 47:e13393. PMID: 38133602. DOI: 10.1111/cogs.13393.
Abstract
In our daily lives, we are continually involved in decision-making, much of which takes place in the context of social interaction. Despite the ubiquity of such situations, there remains a gap in our understanding of how decision-making unfolds in social contexts and how communicative signals, such as social cues and feedback, impact the choices we make. Interestingly, humans are now increasingly exposed to a new social context: interaction not only with other humans but also with artificial agents, such as robots or avatars. Given these technological developments, it is of great interest to ask whether, and in what way, social signals exhibited by non-human agents influence decision-making. The present study examined whether a robot's non-verbal communicative behavior affects human decision-making. To this end, we implemented a two-alternative forced-choice task, an adaptation of the "shell game," in which participants guessed which of two presented cups covered a ball. A robot avatar acted as a game partner, producing social cues and feedback. We manipulated the robot's cues (pointing toward one of the cups) before the participant's decision and its feedback ("thumbs up" or no feedback) after the decision. Participants were slower, compared to the other conditions, when cues were mostly invalid and the robot reacted positively to wins. We argue that this was due to the incongruence of the signals (cue vs. feedback) and the resulting violation of expectations. In sum, our findings show that incongruence between pre- and post-decision social signals from a robot significantly influences task performance, highlighting the importance of understanding expectations toward social robots for effective human-robot interaction.
Affiliation(s)
- Lorenzo Parenti
  - Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
  - Department of Psychology, University of Turin
- Marwen Belkaid
  - Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
  - ETIS UMR 8051, CY Cergy Paris Université, ENSEA, CNRS
- Agnieszka Wykowska
  - Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
5. Abubshait A, Kompatsiari K, Cardellicchio P, Vescovo E, De Tommaso D, Fadiga L, D'Ausilio A, Wykowska A. Modulatory Effects of Communicative Gaze on Attentional Orienting Are Driven by Dorsomedial Prefrontal Cortex but Not Right Temporoparietal Junction. J Cogn Neurosci 2023; 35:1670-1680. PMID: 37432740. DOI: 10.1162/jocn_a_02032.
Abstract
Communicative gaze (e.g., mutual or averted) has been shown to affect attentional orienting. However, no study to date has clearly separated the neural basis of the purely social component that modulates attentional orienting in response to communicative gaze from other processes that might combine attentional and social effects. We used TMS to isolate the purely social effects of communicative gaze on attentional orienting. Participants completed a gaze-cueing task with a humanoid robot that engaged in either mutual or averted gaze before shifting its gaze. Before the task, participants received sham stimulation (baseline), stimulation of the right temporoparietal junction (rTPJ), or stimulation of the dorsomedial prefrontal cortex (dmPFC). As expected, communicative gaze affected attentional orienting in the baseline condition. This effect was not evident under rTPJ stimulation; interestingly, rTPJ stimulation also canceled out attentional orienting altogether. dmPFC stimulation, on the other hand, eliminated the socially driven difference in attentional orienting between the two gaze conditions while preserving the basic general orienting effect. Thus, our results allowed us to separate the purely social effect of communicative gaze on attentional orienting from processes that combine social and generic attentional components.
Affiliation(s)
- Enrico Vescovo
  - Istituto Italiano di Tecnologia, Ferrara, Italy
  - Università di Ferrara, Italy
- Luciano Fadiga
  - Istituto Italiano di Tecnologia, Ferrara, Italy
  - Università di Ferrara, Italy
6. Fu D, Abawi F, Carneiro H, Kerzel M, Chen Z, Strahl E, Liu X, Wermter S. A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution. Int J Soc Robot 2023; 15:1-16. PMID: 37359433. PMCID: PMC10067521. DOI: 10.1007/s12369-023-00993-3.
Abstract
To enhance human-robot social interaction, it is essential for robots to process multiple social cues in complex real-world environments. However, incongruency of input information across modalities is inevitable and can be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. A behavioural experiment was conducted with 37 participants for the human study. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound; gaze direction and sound location were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better in the congruent audio-visual condition than in the incongruent condition. For the robot study, our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively. After mounting the trained model on the iCub, we exposed the robot to laboratory conditions similar to those of the human experiment. While human performance remained superior overall, our trained model demonstrated that it could replicate human-like attention responses.
Affiliation(s)
- Di Fu
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
  - Department of Informatics, University of Hamburg, Hamburg, Germany
- Fares Abawi
  - Department of Informatics, University of Hamburg, Hamburg, Germany
- Hugo Carneiro
  - Department of Informatics, University of Hamburg, Hamburg, Germany
- Matthias Kerzel
  - Department of Informatics, University of Hamburg, Hamburg, Germany
- Ziwei Chen
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Erik Strahl
  - Department of Informatics, University of Hamburg, Hamburg, Germany
- Xun Liu
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Stefan Wermter
  - Department of Informatics, University of Hamburg, Hamburg, Germany
7. Gain-loss separability in human- but not computer-based changes of mind. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107712.
8. Li M, Guo F, Wang X, Chen J, Ham J. Effects of robot gaze and voice human-likeness on users' subjective perception, visual attention, and cerebral activity in voice conversations. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107645.
9. Being watched by a humanoid robot and a human: Effects on affect-related psychophysiological responses. Biol Psychol 2022; 175:108451. DOI: 10.1016/j.biopsycho.2022.108451.
10. A Massage Area Positioning Algorithm for Intelligent Massage System. Computational Intelligence and Neuroscience 2022; 2022:7678516. PMID: 35965757. PMCID: PMC9371831. DOI: 10.1155/2022/7678516.
Abstract
A growing number of studies over the past few years have addressed the positioning problem for daily massage robots. However, most existing methods offer little interactivity and lack a systematic approach to accurate, intelligent positioning, which compromises usability and user experience. In this study, a massage positioning algorithm with online learning capability is presented. The algorithm has two main innovations: (1) autonomous massage localization is achieved by gaining insights into natural human-machine interaction behavior, and (2) online learning of user massage habits is achieved by integrating recursive Bayesian ideas. The experimental results reveal that combining natural human-computer interaction and online learning with massage positioning can free users from positioning aids, reduce their psychological and cognitive load, and achieve a more desirable positioning effect. Furthermore, an analysis of user evaluations further verifies the effectiveness of the algorithm.
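The recursive Bayesian idea mentioned in this abstract can be illustrated with a minimal sketch that updates a belief over candidate massage areas after each observed user selection. The area names and likelihood values here are invented for illustration and are not from the paper.

```python
def bayes_update(prior, likelihood):
    """One recursive Bayesian step: posterior is proportional to
    likelihood times prior, renormalized to sum to one."""
    unnormalized = {area: prior[area] * likelihood[area] for area in prior}
    total = sum(unnormalized.values())
    return {area: p / total for area, p in unnormalized.items()}

# Start from a uniform belief over three hypothetical massage areas.
belief = {"neck": 1 / 3, "shoulder": 1 / 3, "lower_back": 1 / 3}

# Each session, the user's observed choice sharpens the belief online;
# the same update rule runs after every new observation.
for observed in ["shoulder", "shoulder", "neck"]:
    likelihood = {area: (0.8 if area == observed else 0.1) for area in belief}
    belief = bayes_update(belief, likelihood)
```

Because each posterior becomes the next prior, the robot can adapt to a user's habits incrementally without storing the full interaction history.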
11. Enhancing the Sense of Attention from an Assistance Mobile Robot by Improving Eye-Gaze Contact from Its Iconic Face Displayed on a Flat Screen. Sensors 2022; 22:4282. PMID: 35684903. PMCID: PMC9185237. DOI: 10.3390/s22114282.
Abstract
One direct way to express attention in human interaction is through gaze. This paper presents an enhancement of the sense of attention conveyed by the face of a human-sized mobile robot during an interaction. The robot was designed as an assistance mobile robot and uses a flat screen at its top to display an iconic (simplified) face with big round eyes and a single line as a mouth. Implementing eye-gaze contact with this iconic face is challenging because of the difficulty of simulating real 3D spherical eyes in a 2D image while accounting for the perspective of the person interacting with the robot. The perception of eye-gaze contact was improved by manually calibrating the robot's gaze relative to the location of the face of the person interacting with it. The sense of attention was further enhanced by implementing cyclic face explorations with saccades in the gaze and by performing blinking and small movements of the mouth.
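The calibrated 2D gaze described in this abstract amounts to shifting each drawn pupil toward the detected face position, clamped so the pupil stays inside the eye circle. The following sketch assumes a normalized screen-coordinate mapping and a gain parameter; the function name and values are illustrative, not the paper's code.

```python
def pupil_offset(face_x, face_y, screen_w, screen_h, eye_radius, gain=0.9):
    """Displace an iconic 2D pupil toward a detected face position.

    The face position is normalized to [-1, 1] per axis, then scaled into
    a pupil shift kept inside the drawn eye circle (gain < 1)."""
    nx = 2.0 * face_x / screen_w - 1.0
    ny = 2.0 * face_y / screen_h - 1.0
    max_shift = gain * eye_radius

    def clamp(v):
        return max(-1.0, min(1.0, v))

    return (clamp(nx) * max_shift, clamp(ny) * max_shift)

# A face centered in front of the screen yields centered pupils,
# i.e. direct eye contact; an off-center face shifts both pupils toward it.
dx, dy = pupil_offset(face_x=640, face_y=360, screen_w=1280, screen_h=720, eye_radius=40)
```

Per-person manual calibration, as the paper describes, would correspond to tuning the gain and the normalization offsets until the viewer perceives mutual gaze from their actual position.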