1. Whole-Body Dynamics for Humanoid Robot Fall Protection Trajectory Generation with Wall Support. Biomimetics (Basel) 2024; 9:245. PMID: 38667256; PMCID: PMC11048354; DOI: 10.3390/biomimetics9040245.
Abstract
When humanoid robots work in human environments, they are prone to falling. However, when there are objects around that can be utilized, humanoid robots can leverage them to achieve balance. To address this issue, this paper establishes the state equation of the robot using a variable-height inverted pendulum model and implements online trajectory optimization using model predictive control. For the arms' optimal joint angles during movement, the distributed polygon method is used to calculate the arm postures. To ensure that the robot reaches the target position smoothly and rapidly during its motion, a whole-body motion control approach is adopted, establishing a cost function with multi-objective constraints on the robot's movement: whole-body dynamics, center-of-mass constraints, arm end-effector constraints, friction constraints, and center-of-pressure constraints. In simulation, four sets of methods were compared, and the results indicate that, compared to free fall motion, the proposed method reduces the robot's maximum acceleration on wall contact to 69.1 m/s², effectively reducing the impact force. Finally, in a physical experiment, we positioned the robot 0.85 m from the wall and applied a forward pushing force. The robot landed stably against the wall, and the impact force remained within the range acceptable to the robot, confirming the practical effectiveness of the proposed method.
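The state equation in this entry builds on a variable-height inverted pendulum. As a rough illustration only (a constant-height linear inverted pendulum, not the authors' variable-height model or their MPC, with all numbers assumed), the sketch below rolls the pendulum dynamics forward after a push and tracks the divergent component of motion that the trajectory optimizer must arrest:

```python
import numpy as np

# Linearized inverted-pendulum dynamics: x_dd = (g / z) * (x - p),
# with CoM height z held constant (the paper varies z; this sketch does not).
g, z, dt = 9.81, 0.8, 0.01
omega = np.sqrt(g / z)          # natural frequency of the pendulum

def step(state, p):
    """Advance (CoM position, velocity) one Euler step with support point p."""
    x, xd = state
    xdd = omega**2 * (x - p)
    return np.array([x + xd * dt, xd + xdd * dt])

# Roll out a forward push: CoM starts above the support point with velocity.
state = np.array([0.0, 0.6])
for _ in range(50):
    state = step(state, 0.0)

# The divergent component of motion grows unless extra contact forces act on
# it; the MPC in the paper would plan wall-contact forces to bound it.
dcm = state[0] + state[1] / omega
print(dcm > 0.6 / omega)        # True: the push is not self-arresting
```

Under these dynamics the divergent component grows by a factor (1 + ω·dt) per step, which is why a purely passive response cannot recover and an extra contact (here, the wall) is needed.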
2. Curriculum-based humanoid robot identification using large-scale human motion database. Front Robot AI 2023; 10:1282299. PMID: 38099007; PMCID: PMC10720581; DOI: 10.3389/frobt.2023.1282299.
Abstract
Identifying an accurate dynamics model remains challenging for humanoid robots. The difficulty is mainly due to two points. First, a good initial model is required to evaluate the feasibility of motions for data acquisition. Second, a highly nonlinear optimization problem needs to be solved to design movements for acquiring the identification data. To cope with the first point, we propose a curriculum of identification that gradually learns an accurate dynamics model from an unreliable initial model. For the second point, we propose using a large-scale human motion database to efficiently design humanoid movements for parameter identification. The contribution of our study is a humanoid identification method that requires neither a good initial model nor the solution of a highly nonlinear optimization problem. We showed that our curriculum-based approach identified humanoid model parameters more efficiently than a method that randomly picked reference motions for identification. We evaluated the proposed method in a simulation experiment and demonstrated that our curriculum led to a wide variety of motion data for efficient parameter estimation. Consequently, our approach successfully identified an accurate model of an 18-DoF, simulated upper-body humanoid robot.
3. Joint Reconfiguration after Failure for Performing Emblematic Gestures in Humanoid Receptionist Robot. Sensors (Basel) 2023; 23:9277. PMID: 38005663; PMCID: PMC10675268; DOI: 10.3390/s23229277.
Abstract
This study proposed a strategy for quick fault recovery when an actuator fails while a humanoid robot with 7-DOF anthropomorphic arms is performing a task with upper-body motion. The objective was to develop a joint-reconfiguration algorithm for the receptionist robot Namo so that it can still perform a set of emblematic gestures if an actuator fails or is damaged. We proposed a gesture similarity measurement as the objective function and used bio-inspired artificial intelligence methods, including a genetic algorithm (GA), a bacterial foraging optimization algorithm (BFOA), and an artificial bee colony (ABC) algorithm, to find good joint reconfigurations. When an actuator fails, the failed joint is locked at the average angle calculated across all emblematic gestures. We used grid search to determine suitable parameter sets for each method before comparing their performance. The results showed that the bio-inspired methods could suggest reconfigured gestures after joint motor failure within 1 s. After 100 repetitions, BFOA and ABC returned the best reconfigured gestures, with no statistically significant difference between them; however, ABC yielded more reliable reconfigured gestures, with a significantly smaller interquartile range than BFOA. The joint reconfiguration method was demonstrated for all possible joint failure conditions. The results showed that the proposed method could determine good reconfigured gestures under the given time constraints; hence, it could be used for joint failure recovery in real applications.
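The locking-plus-search idea in this entry can be sketched with a toy planar arm. Everything here is assumed for illustration (a 3-DOF arm, link lengths, two example gestures, and simple hill climbing standing in for the GA/BFOA/ABC optimizers); the objective mimics the gesture-similarity measurement by comparing end-effector poses:

```python
import numpy as np

rng = np.random.default_rng(0)
lengths = np.array([0.3, 0.25, 0.15])          # link lengths (assumed)

def fk(q):
    """Planar forward kinematics: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([(lengths * np.cos(angles)).sum(),
                     (lengths * np.sin(angles)).sum()])

# Two example "emblematic gestures" as joint-angle vectors (assumed).
gestures = [np.array([0.4, 0.6, -0.2]), np.array([1.0, -0.3, 0.5])]
failed = 1                                     # index of the failed joint
lock = np.mean([g[failed] for g in gestures])  # lock at the mean gesture angle

def dissimilarity(q, target_q):
    """Toy gesture-similarity objective: distance between end-effector poses."""
    return np.linalg.norm(fk(q) - fk(target_q))

def reconfigure(target_q, iters=2000):
    """Hill climbing over the healthy joints (stand-in for GA/BFOA/ABC)."""
    best = target_q.copy()
    best[failed] = lock
    best_cost = dissimilarity(best, target_q)
    for _ in range(iters):
        cand = best + rng.normal(0.0, 0.05, size=3)
        cand[failed] = lock                    # the failed joint stays locked
        cost = dissimilarity(cand, target_q)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

q_new, cost = reconfigure(gestures[0])
# The healthy joints compensate for the locked joint, so the reconfigured
# gesture is closer to the original than simply locking and doing nothing.
print(cost < dissimilarity(np.array([0.4, lock, -0.2]), gestures[0]))  # True
```

The paper's population-based optimizers explore the same kind of search space; the point of the sketch is only the structure of the problem: one joint frozen at a gesture-averaged angle, the rest optimized against a similarity objective.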
4. Control and evaluation of a humanoid robot with rolling contact joints on its lower body. Front Robot AI 2023; 10:1164660. PMID: 37908754; PMCID: PMC10613887; DOI: 10.3389/frobt.2023.1164660.
Abstract
In this paper, we introduce a new teen-sized humanoid platform dubbed DRACO 3, custom-built by Apptronik and altered for practical use by the Human Centered Robotics Laboratory at The University of Texas at Austin. The form factor of DRACO 3 is such that it can operate safely in human environments while reaching objects at human heights. To approximate the range of motion of humans, this robot features proximal actuation and mechanical artifacts to provide a high range of hip, knee, and ankle motions. In particular, rolling contact mechanisms on the lower body are incorporated using a proximal actuation principle to provide an extensive vertical pose workspace. To enable DRACO 3 to perform dexterous tasks while dealing with these complex transmissions, we introduce a novel whole-body controller (WBC) incorporating internal constraints to model the rolling motion behavior. In addition, details of our WBC for DRACO 3 are presented with an emphasis on practical points for hardware implementation. We perform a design analysis of DRACO 3, as well as empirical evaluations under the lens of the Centroidal Inertia Isotropy (CII) design metric. Lastly, we experimentally validate our design and controller by testing center of mass (CoM) balancing, one-leg balancing, and stepping-in-place behaviors.
5. Editorial: Emerging technologies for assistive robotics: current challenges and perspectives. Front Robot AI 2023; 10:1288360. PMID: 37881772; PMCID: PMC10597713; DOI: 10.3389/frobt.2023.1288360.
6. Biomimetic Approaches for Human Arm Motion Generation: Literature Review and Future Directions. Sensors (Basel) 2023; 23:3912. PMID: 37112253; PMCID: PMC10143908; DOI: 10.3390/s23083912.
Abstract
In recent years, numerous studies have been conducted to analyze how humans subconsciously optimize various performance criteria while performing a particular task, which has led to the development of robots that are capable of performing tasks with a similar level of efficiency as humans. The complexity of the human body has led researchers to create a framework for robot motion planning to recreate those motions in robotic systems using various redundancy resolution methods. This study conducts a thorough analysis of the relevant literature to provide a detailed exploration of the different redundancy resolution methodologies used in motion generation for mimicking human motion. The studies are investigated and categorized according to the study methodology and various redundancy resolution methods. An examination of the literature revealed a strong trend toward formulating intrinsic strategies that govern human movement through machine learning and artificial intelligence. Subsequently, the paper critically evaluates the existing approaches and highlights their limitations. It also identifies the potential research areas that hold promise for future investigations.
7. Artificial scaffolding: Augmenting social cognition by means of robot technology. Autism Res 2023; 16:997-1008. PMID: 36847354; DOI: 10.1002/aur.2906.
Abstract
The concept of scaffolding refers to the support that the environment provides in the acquisition and consolidation of new abilities. Technological advancements allow for support in the acquisition of cognitive capabilities, such as second language acquisition using simple smartphone applications. There is, however, one domain of cognition that has scarcely been addressed in the context of technologically assisted scaffolding: social cognition. We explored the possibility of supporting the acquisition of social competencies in a group of children with autism spectrum disorder engaged in a rehabilitation program (age = 5.8 ± 1.14, 10 females, 33 males) by designing two robot-assisted training protocols tailored to Theory of Mind (ToM) competencies. One protocol was performed with a humanoid robot and the other (control) with a non-anthropomorphic robot. We analyzed changes in NEPSY-II scores before and after the training using mixed-effects models. Our results showed that activities with the humanoid significantly improved NEPSY-II scores on the ToM scale. We claim that the motor repertoire of humanoids makes them ideal platforms for artificial scaffolding of social skills in individuals with autism, as they can evoke social mechanisms similar to those elicited in human-human interaction without exerting the same social pressure that another human might.
8. Turn-Taking Mechanisms in Imitative Interaction: Robotic Social Interaction Based on the Free Energy Principle. Entropy (Basel) 2023; 25:263. PMID: 36832633; PMCID: PMC9955692; DOI: 10.3390/e25020263.
Abstract
This study explains how a leader-follower relationship and turn-taking can develop in dyadic imitative interaction, using robotic simulation experiments based on the free energy principle. Our prior study showed that introducing a parameter during the model training phase can determine leader and follower roles for subsequent imitative interactions. The parameter, w, the so-called meta-prior, is a weighting factor that regulates the complexity term versus the accuracy term when minimizing free energy. It can be read as sensory attenuation, in which the robot's prior beliefs about action become less sensitive to sensory evidence. The current extended study examines the possibility that the leader-follower relationship shifts depending on changes in w during the interaction phase. Through comprehensive simulation experiments sweeping the w of both robots during the interaction, we identified a phase-space structure with three distinct types of behavioral coordination. When both ws were set to large values, each robot ignored its partner and followed its own intention. When one w was set larger and the other smaller, one robot led and the other followed. Spontaneous, random turn-taking between leader and follower was observed when both ws were set to small or intermediate values. Finally, we examined a case in which w oscillated slowly in anti-phase between the two agents during the interaction. The simulation resulted in turn-taking in which the leader-follower relationship switched during determined sequences, accompanied by periodic shifts of ws. An analysis using transfer entropy found that the direction of information flow between the two agents also shifted along with turn-taking. Herein, we discuss qualitative differences between random/spontaneous turn-taking and agreed-upon sequential turn-taking by reviewing both synthetic and empirical studies.
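The role of the meta-prior w described in this entry, weighting complexity against accuracy, can be caricatured with a one-dimensional Gaussian example. This is a schematic only; the actual study uses variational recurrent networks, and every number below is assumed:

```python
import numpy as np

def kl_gauss(mu_q, var_q, mu_p, var_p):
    """KL divergence between two univariate Gaussians (complexity term)."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p)**2) / var_p - 1.0)

def free_energy(mu_q, obs, w, mu_prior=0.0, var=0.1):
    """Schematic free energy: w * complexity + accuracy (prediction error)."""
    complexity = kl_gauss(mu_q, var, mu_prior, var)
    accuracy = (obs - mu_q)**2 / (2 * var)
    return w * complexity + accuracy

def infer(obs, w):
    """Pick the posterior mean minimizing free energy on a grid."""
    grid = np.linspace(-2, 2, 2001)
    return grid[np.argmin([free_energy(m, obs, w) for m in grid])]

# Large w: the posterior sticks to the prior, i.e. sensory attenuation,
# which the paper associates with leader-like behavior.
print(infer(obs=1.0, w=50.0))   # near 0.0 (prior wins)
# Small w: the posterior tracks the observation, follower-like behavior.
print(infer(obs=1.0, w=0.1))    # near 1.0 (evidence wins)
```

With equal variances the minimizer is mu = obs / (1 + w), which makes the leader/follower asymmetry of the two w regimes explicit.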
9. Design and Performance Analysis of LARMbot Torso V1. Micromachines 2022; 13:1548. PMID: 36144171; PMCID: PMC9502744; DOI: 10.3390/mi13091548.
Abstract
In this paper, laboratory experiments on LARMbot torso V1 operating in its third mode are reported, providing a testing characterization. Sensors were used to measure parameters including the contact force between the shoulder and cables, linear acceleration, torso body angles, and power consumption. The results showed that the LARMbot torso V1 can bend successfully to the desired angles and complete a full motion smoothly, mimicking human-like motions. Based on our analysis of the test results, improvements are suggested and new designs are considered.
10. Facing the FACS-Using AI to Evaluate and Control Facial Action Units in Humanoid Robot Face Development. Front Robot AI 2022; 9:887645. PMID: 35774595; PMCID: PMC9237251; DOI: 10.3389/frobt.2022.887645.
Abstract
This paper presents a new approach for evaluating and controlling expressive humanoid robotic faces using open-source computer vision and machine learning methods. Existing research in Human-Robot Interaction lacks flexible and simple tools that are scalable for evaluating and controlling various robotic faces; thus, our goal is to demonstrate the use of readily available AI-based solutions to support the process. We use a newly developed humanoid robot prototype intended for medical training applications as a case example. The approach automatically captures the robot’s facial action units through a webcam during random motion, which are components traditionally used to describe facial muscle movements in humans. Instead of manipulating the actuators individually or training the robot to express specific emotions, we propose using action units as a means for controlling the robotic face, which enables a multitude of ways to generate dynamic motion, expressions, and behavior. The range of action units achieved by the robot is thus analyzed to discover its expressive capabilities and limitations and to develop a control model by correlating action units to actuation parameters. Because the approach is not dependent on specific facial attributes or actuation capabilities, it can be used for different designs and continuously inform the development process. In healthcare training applications, our goal is to establish a prerequisite of expressive capabilities of humanoid robots bounded by industrial and medical design constraints. Furthermore, to mediate human interpretation and thus enable decision-making based on observed cognitive, emotional, and expressive cues, our approach aims to find the minimum viable expressive capabilities of the robot without having to optimize for realism. The results from our case example demonstrate the flexibility and efficiency of the presented AI-based solutions to support the development of humanoid facial robots.
11. Analysis and control of a running spring-mass model with a trunk based on virtual pendulum concept. Bioinspiration & Biomimetics 2022; 17:046009. PMID: 35523159; DOI: 10.1088/1748-3190/ac6d97.
Abstract
The spring-loaded inverted pendulum model has been one of the most studied conceptual models in the locomotion community. Even though it can adequately explain the center of mass trajectories of numerous legged animals, it remains insufficient in template-based control of complex robot platforms, being unable to capture additional dynamic characteristics of locomotion exhibited in additional degrees of freedom such as trunk pitch oscillations. In fact, analysis of trunk behavior during locomotion has been one of the motivations behind studying the virtual pivot point (VPP) concept, with biological inspiration and basis for both natural and synthetic systems with non-negligible trunk dynamics. This study first presents a comprehensive analysis of the VPP concept for planar running behaviors, followed by a systematic study of the existence and characteristics of periodic solutions. In particular, we investigate how periodic solutions depend on model control parameters and compare them based on stability and energetic cost. We then develop a feedback controller that can stabilize system dynamics around its periodic solutions and evaluate performance as compared to a previously introduced controller from the literature. We demonstrate the effectiveness of both controllers and find that the proposed control scheme creates larger basins of attraction with minor degradation in convergence speed. In conclusion, this study shows that the VPP concept, in conjunction with the proposed controller, could be beneficial in designing and controlling legged robots capable of running with non-trivial upper body dynamics. Our systematic analysis of periodic solutions arising from the use of the VPP concept is also an important step towards a more formal basis for comparisons of the VPP concept with bio-locomotion.
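The virtual pivot point (VPP) idea in this entry, redirecting the ground reaction force so its line of action passes through a point above the CoM, can be illustrated with a planar snapshot. This is a geometric sketch only, not the paper's controller; the CoM pose, VPP offset, and leg-spring force are all assumed values:

```python
import numpy as np

# Planar snapshot of GRF redirection toward a virtual pivot point (VPP)
# located a distance r_vpp above the CoM along the trunk axis.
foot = np.array([0.0, 0.0])         # stance-foot contact point
com = np.array([0.05, 1.0])         # CoM slightly ahead of the foot (assumed)
trunk_axis = np.array([0.0, 1.0])   # upright trunk for this snapshot
r_vpp = 0.2                         # VPP height above the CoM (assumed)
vpp = com + r_vpp * trunk_axis

leg = com - foot
leg_len = np.linalg.norm(leg)
leg_dir = leg / leg_len

# Direction the GRF must take for its line of action to pass through the VPP.
grf_dir = (vpp - foot) / np.linalg.norm(vpp - foot)

# Split a leg-spring force into axial and tangential parts; the tangential
# part is what a hip torque must supply to realize the redirection.
f_axial = 800.0                     # leg-spring force in newtons (assumed)
cross = leg_dir[0] * grf_dir[1] - leg_dir[1] * grf_dir[0]
angle = np.arctan2(abs(cross), leg_dir @ grf_dir)
f_tan = f_axial * np.tan(angle)
tau_hip = f_tan * leg_len           # hip torque for this small redirection
print(round(tau_hip, 2))            # a few newton-metres for this geometry
```

Because the GRF then behaves like the string of a virtual pendulum hanging from the VPP, the trunk pitch dynamics are passively stabilized, which is the property the paper's periodic-solution analysis builds on.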
12. IoT and AI-Based Application for Automatic Interpretation of the Affective State of Children Diagnosed with Autism. Sensors (Basel) 2022; 22:2528. PMID: 35408139; PMCID: PMC9003434; DOI: 10.3390/s22072528.
Abstract
Building on evidence that humanoid robots are effective in helping children diagnosed with autism explore their affective state, this paper underlines and demonstrates the effectiveness of a previously developed machine-learning-based mobile application called PandaSays, which was improved and integrated with an Alpha 1 Pro robot, and discusses performance evaluations using deep convolutional neural networks and residual neural networks. The model trained with the MobileNet convolutional neural network had an accuracy of 56.25%, performing better than ResNet50 and VGG16. A strategy for commanding the Alpha 1 Pro robot without its native application was also established, and a robot module was developed that includes the communication protocols with the PandaSays application. The output of the machine learning algorithm in PandaSays is sent to the humanoid robot to execute actions such as singing and dancing. Alpha 1 Pro has its own programming language, Blockly, and, to give the robot specific commands, Bluetooth programming is used with the help of a Raspberry Pi. The robot's motions can therefore be controlled based on the corresponding protocols. The tests proved the robustness of the whole solution.
13. ExGenNet: Learning to Generate Robotic Facial Expression Using Facial Expression Recognition. Front Robot AI 2022; 8:730317. PMID: 35059440; PMCID: PMC8764256; DOI: 10.3389/frobt.2021.730317.
Abstract
The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on the use of a set of fixed, preprogrammed joint configurations for expression generation. Automating this process provides potential advantages to scale better to different robot types and various expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network to reconstruct simplified facial images from robot joint configurations with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted expression and intended expression through the classification network and the generator network. To improve the transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study where the majority of human subjects could accurately recognize most of the generated facial expressions on both the robots.
14. Versatile Locomotion Planning and Control for Humanoid Robots. Front Robot AI 2021; 8:712239. PMID: 34485391; PMCID: PMC8414409; DOI: 10.3389/frobt.2021.712239.
Abstract
We propose a locomotion framework for bipedal robots consisting of a new motion planning method, dubbed trajectory optimization for walking robots plus (TOWR+), and a new whole-body control method, dubbed implicit hierarchical whole-body controller (IHWBC). For versatility, we consider the use of a composite rigid body (CRB) model to optimize the robot’s walking behavior. The proposed CRB model considers the floating base dynamics while accounting for the effects of the heavy distal mass of humanoids using a pre-trained centroidal inertia network. TOWR+ leverages the phase-based parameterization of its precursor, TOWR, and optimizes for base and end-effectors motions, feet contact wrenches, as well as contact timing and locations without the need to solve a complementary problem or integer program. The use of IHWBC enforces unilateral contact constraints (i.e., non-slip and non-penetration constraints) and a task hierarchy through the cost function, relaxing contact constraints and providing an implicit hierarchy between tasks. This controller provides additional flexibility and smooth task and contact transitions as applied to our 10 degree-of-freedom, line-feet biped robot DRACO. In addition, we introduce a new open-source and light-weight software architecture, dubbed planning and control (PnC), that implements and combines TOWR+ and IHWBC. PnC provides modularity, versatility, and scalability so that the provided modules can be interchanged with other motion planners and whole-body controllers and tested in an end-to-end manner. In the experimental section, we first analyze the performance of TOWR+ using various bipeds. We then demonstrate balancing behaviors on the DRACO hardware using the proposed IHWBC method. Finally, we integrate TOWR+ and IHWBC and demonstrate step-and-stop behaviors on the DRACO hardware.
15. Human inspired fall arrest strategy for humanoid robots based on stiffness ellipsoid optimisation. Bioinspiration & Biomimetics 2021; 16:056014. PMID: 34348251; DOI: 10.1088/1748-3190/ac1ab9.
Abstract
Falls are a common risk and pose severe threats to both humans and humanoid robots as a by-product of bipedal locomotion. Inspired by human fall arrest, we present a novel humanoid robot fall prevention strategy that uses the arms to make contact with environmental objects. First, the capture point method is used to detect falling. Once a fall is inevitable, the robot's arm is actuated to make contact with an environmental object to prevent the fall. We hypothesize that humans naturally favour poses that generate a suitable Cartesian stiffness at the arm end-effector. Based on this principle, a configuration optimiser is designed to choose an arm pose that maximises the stiffness ellipsoid of the endpoint along the impact force direction. During contact, the upper limb acts as an adjustable active spring-damper and absorbs the impact shock. To validate the proposed strategy, several simulations are performed in MATLAB and Simulink with the humanoid robot confronting a wall as a case study, in which the strategy proves effective and feasible. The results show that using the proposed strategy reduces the joint torque during impact when the arms are used to arrest the fall.
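The pose-selection principle in this entry, maximizing endpoint stiffness along the impact direction, can be sketched for a planar two-link arm. The link lengths, joint stiffnesses, and grid search below are all assumed stand-ins for the paper's configuration optimiser; the directional stiffness follows from mapping joint stiffness through the Jacobian:

```python
import numpy as np

l1, l2 = 0.3, 0.25                      # link lengths (assumed)
Kq = np.diag([30.0, 20.0])              # joint stiffnesses, N*m/rad (assumed)
d = np.array([1.0, 0.0])                # impact direction (toward the wall)

def jacobian(q1, q2):
    """Geometric Jacobian of a planar two-link arm."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def dir_stiffness(q1, q2):
    """Endpoint stiffness along d: 1 / (d^T J Kq^{-1} J^T d)."""
    J = jacobian(q1, q2)
    compliance = d @ J @ np.linalg.inv(Kq) @ J.T @ d
    return 1.0 / compliance

# Grid-search the configuration that maximizes stiffness along the impact
# direction (the paper uses a proper optimiser; this is only a sketch).
best = max(((q1, q2) for q1 in np.linspace(-1.5, 1.5, 61)
                     for q2 in np.linspace(0.2, 2.8, 53)),
           key=lambda q: dir_stiffness(*q))
# Poses that align the arm with the impact direction score highest, matching
# the intuition of bracing along the line of the expected impact force.
print(best, dir_stiffness(*best))
```

The reciprocal of d^T J Kq^{-1} J^T d is exactly the squared-radius of the stiffness ellipsoid along d, so ranking poses by this scalar is one concrete way to read the paper's "maximise the stiffness ellipsoid along the impact force direction".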
16. The Humanoid Robot Sil-Bot in a Cognitive Training Program for Community-Dwelling Elderly People with Mild Cognitive Impairment during the COVID-19 Pandemic: A Randomized Controlled Trial. International Journal of Environmental Research and Public Health 2021; 18:8198. PMID: 34360490; PMCID: PMC8345968; DOI: 10.3390/ijerph18158198.
Abstract
Background: Mild cognitive impairment (MCI) is a stage preceding dementia, and early intervention is critical. This study investigated whether multi-domain cognitive training programs, especially robot-assisted training, conducted 12 times, twice a week for 6 weeks, can improve cognitive function and reduce depression in community-dwelling older adults with MCI. Methods: A randomized controlled trial was conducted with 135 volunteers aged 60 years or older. Participants were first randomized into a cognitive training group of 90 participants and a no-intervention (NI) group of 45. The cognitive training group was then randomly divided into two groups: 45 received traditional cognitive training (TCT) and 45 received robot-assisted cognitive training (RACT). Training for both groups consisted of a daily 60 min session, twice a week for six weeks. Results: RACT participants had significantly greater post-intervention improvement in cognitive function (t = 4.707, p < 0.001), memory (t = −2.282, p = 0.007), executive function (t = 4.610, p < 0.001), and depression (t = −3.307, p = 0.004). TCT participants had greater post-intervention improvement in memory (t = −6.671, p < 0.001) and executive function (t = 5.393, p < 0.001). Conclusions: A 6-week robot-assisted, multi-domain cognitive training program can improve global cognitive function and depression during cognitive tasks in older adults with MCI, which is associated with improvements in memory and executive function.
17. Human-Robot Confluence: Toward a Humane Robotics. Cyberpsychology, Behavior, and Social Networking 2021; 24:291-293. PMID: 34003012; DOI: 10.1089/cyber.2021.29215.gri.
Abstract
The new humanoid robots not only perform tasks but can also initiate interactions and social relationships with other robots and with humans. In this view, the diffusion of humanoid robots with a physical structure reminiscent of the human body, endowed with decision-making abilities and capable of externalizing and generating emotions, is opening a new line of research whose main objective is understanding the dynamics of the social interactions generated by encounters between robots and humans. However, this process is not easy. To be accepted by society, robots have to "understand" people and adapt themselves to complex real-life social environments. This goal underlines the importance of research on aspects such as communication, acceptance, and ethics, which require collaboration between multiple disciplines, including psychology, neuroscience, design, mechatronics, computer science, philosophy, sociology, anthropology, biomechanics, and roboethics. This special issue seeks to gather knowledge from these disciplines with respect to human-robot confluence (HRC) in the application of robots in everyday life, including robot training partners and industrial collaborative robots (cobots). It covers a wide range of topics related to HRC, involving theories, methodologies, technologies, and empirical and experimental studies. The final goal is to support researchers and developers in creating robots that not only have a humanoid body but are truly "humane": accessible, sympathetic, generous, compassionate, and forbearing.
|
18
|
A Holistic Approach to Human-Supervised Humanoid Robot Operations in Extreme Environments. Front Robot AI 2021; 8:550644. PMID: 34222345. PMCID: PMC8249801. DOI: 10.3389/frobt.2021.550644.
Abstract
Nuclear energy will play a critical role in meeting clean energy targets worldwide. However, nuclear environments are dangerous for humans to operate in due to the presence of highly radioactive materials. Robots can help address this issue by allowing remote access to nuclear and other highly hazardous facilities under human supervision to perform inspection and maintenance tasks during normal operations, help with clean-up missions, and aid in decommissioning. This paper presents our research toward realizing humanoid robots in supervisory roles in nuclear environments. Our research focuses on the National Aeronautics and Space Administration's (NASA) humanoid robot, Valkyrie, in the areas of constrained manipulation and motion planning, increasing stability using support contact, dynamic non-prehensile manipulation, locomotion on deformable terrains, and human-in-the-loop control interfaces.
|
19
|
Omnidirectional Walking Pattern Generator Combining Virtual Constraints and Preview Control for Humanoid Robots. Front Robot AI 2021; 8:660004. PMID: 34277715. PMCID: PMC8284058. DOI: 10.3389/frobt.2021.660004.
Abstract
This paper presents a novel omnidirectional walking pattern generator for bipedal locomotion that combines two structurally different approaches, based on virtual constraints and on preview control theory, to generate a flexible gait that can be modified on-line. The proposed strategy synchronizes the displacement of the robot along the two planes of walking: zero-moment-point-based preview control is responsible for the lateral component of the gait, while the sagittal motion is generated by a more dynamical approach based on virtual constraints. The resulting algorithm is characterized by low computational complexity and high flexibility, prerequisites for successful deployment to humanoid robots operating in real-world scenarios. This solution is motivated by observations in biomechanics showing that during a nominal gait the dynamic motion of the human walk is mainly generated along the sagittal plane. We describe the implementation of the algorithm and detail the strategy chosen to enable omnidirectionality and on-line gait tuning. We validate our strategy through simulation experiments on the COMAN+ platform, an adult-size humanoid robot developed at Istituto Italiano di Tecnologia, and then implement the hybrid walking pattern generator on the real hardware, with promising results: the generated trajectories yield open-loop stable walking in the absence of external disturbances.
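The ZMP-preview idea behind the lateral gait component can be sketched as a small receding-horizon tracking problem on the cart-table model. The sketch below is illustrative only: the closed-form least-squares solution (in the style of trajectory-free linear MPC), the horizon length, and all parameter values are assumptions, not this paper's implementation.

```python
import numpy as np

# Cart-table / linear inverted pendulum ZMP preview sketch.
T, h, g = 0.1, 0.8, 9.81   # sample time [s], CoM height [m], gravity (assumed values)
N = 20                      # preview horizon (2 s)
alpha = 1e-6                # jerk regularization weight

# State [c, dc, ddc] (CoM position, velocity, acceleration), input = CoM jerk,
# output = zero moment point p = c - (h/g) * ddc.
A = np.array([[1, T, T**2 / 2],
              [0, 1, T       ],
              [0, 0, 1       ]])
B = np.array([[T**3 / 6], [T**2 / 2], [T]])
C = np.array([[1, 0, -h / g]])

# Stack the model over the horizon: Z = Px x0 + Pu U.
Px = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
Pu = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Pu[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

steps = 30                                   # 3 s simulation
t = np.arange(steps + N) * T
zref = np.where(t < 1.0, 0.0, 0.1)           # ZMP reference: 10 cm lateral step at t = 1 s

x = np.zeros((3, 1))
zmp_log, com_log = [], []
K = np.linalg.solve(Pu.T @ Pu + alpha * np.eye(N), Pu.T)  # precomputed gain
for k in range(steps):
    ref = zref[k + 1:k + 1 + N].reshape(-1, 1)
    U = K @ (ref - Px @ x)       # closed-form solution of the preview QP
    x = A @ x + B * U[0, 0]      # apply only the first jerk (receding horizon)
    zmp_log.append((C @ x)[0, 0])
    com_log.append(x[0, 0])
```

Because the controller previews the reference, the CoM starts shifting before the ZMP step while the realized ZMP stays close to the reference throughout.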
|
20
|
Learning to Avoid Obstacles With Minimal Intervention Control. Front Robot AI 2020; 7:60. PMID: 33501228. PMCID: PMC7806040. DOI: 10.3389/frobt.2020.00060.
Abstract
Programming by demonstration has received much attention as it offers a general framework that allows robots to efficiently acquire novel motor skills from a human teacher. While traditional imitation learning focuses on either Cartesian or joint space alone, which can become inappropriate in situations where both spaces are equally important (e.g., writing or striking tasks), hybrid imitation learning of skills in both Cartesian and joint spaces simultaneously has been studied recently. However, an important issue that often arises in dynamic or unstructured environments is overlooked: how can a robot avoid obstacles? In this paper, we address the problem of obstacle avoidance in the context of hybrid imitation learning. Specifically, we tackle three subproblems: (i) designing a proper potential field to bypass obstacles, (ii) guaranteeing that joint limits are respected when trajectories are adjusted to avoid obstacles, and (iii) determining proper control commands for the robot so that potential human-robot interaction is safe. By solving these subproblems, the robot is capable of generalizing observed skills to new situations featuring obstacles in a feasible and safe manner. The effectiveness of the proposed method is validated through a toy example as well as a real transportation experiment on the iCub humanoid robot.
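The first subproblem, designing a repulsive potential field, can be illustrated with a classic distance-based field in the style of Khatib's artificial-potential approach. The field shape, gains, and the first-order point-robot simulation below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def repulsive_force(x, obs, d0=0.2, eta=0.002):
    """Gradient of a classic repulsive potential, active only within d0 of the obstacle."""
    diff = x - obs
    d = np.linalg.norm(diff)
    if d >= d0:
        return np.zeros_like(x)
    # Gradient of 0.5 * eta * (1/d - 1/d0)^2, pointing away from the obstacle.
    return eta * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

goal = np.array([1.0, 0.0])
obs = np.array([0.5, 0.08])        # obstacle slightly off the straight-line path
x = np.array([0.0, 0.0])
dt, k_att = 0.01, 1.0

clearance = []
for _ in range(2000):
    # Attractive field toward the goal plus repulsion from the obstacle.
    f = k_att * (goal - x) + repulsive_force(x, obs)
    x = x + dt * f                  # point robot following the combined field
    clearance.append(np.linalg.norm(x - obs))
```

The repulsion grows without bound as the distance shrinks, so the point is deflected around the obstacle and then converges to the goal, where the repulsive term is inactive.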
|
21
|
Human Interaction Smart Subsystem: Extending Speech-Based Human-Robot Interaction Systems with an Implementation of External Smart Sensors. Sensors 2020; 20:2376. PMID: 32331291. PMCID: PMC7219337. DOI: 10.3390/s20082376.
Abstract
This paper presents a detailed concept of a Human-Robot Interaction system architecture. One of the main differences between the proposed architecture and existing ones is the methodology for acquiring information about the robot's interlocutor. To obtain as much information as possible before the actual interaction takes place, custom Internet-of-Things-based sensor subsystems connected to a Smart Infrastructure were designed and implemented to support interlocutor identification and the acquisition of initial interaction parameters. The Artificial Intelligence interaction framework of the developed robotic system (including a humanoid Pepper with its sensors and actuators, plus additional local, remote, and cloud computing services) is extended with custom external subsystems for additional knowledge acquisition: device-based human identification, visual identification, and audio-based interlocutor localization. These subsystems are introduced and evaluated in depth, demonstrating the benefits of integrating them into the robotic interaction system. The paper also includes a more detailed analysis of one of the external subsystems, the Bluetooth Human Identification Smart Subsystem: its idea, use case, integration with elements of Smart Infrastructure systems, and prototype implementation, tested in a small front office of the Weegree company as a test-bed application area.
|
22
|
Educators' Views on Using Humanoid Robots With Autistic Learners in Special Education Settings in England. Front Robot AI 2019; 6:107. PMID: 33501122. PMCID: PMC7805648. DOI: 10.3389/frobt.2019.00107.
Abstract
Researchers, industry, and practitioners are increasingly interested in the potential of social robots in education for learners on the autism spectrum. In this study, we conducted semi-structured interviews and focus groups with educators in England to gain their perspectives on the potential use of humanoid robots with autistic pupils, eliciting ideas and specific examples of potential use. Understanding educator views is essential because they are key decision-makers for the adoption of robots and would directly facilitate future use with pupils. Educators were provided with several example images (e.g., NAO, KASPAR, Milo) but did not directly interact with robots or receive information on current technical capabilities. The goal was for educators to respond to the general concept of humanoid robots as an educational tool rather than to focus on the existing uses or behaviour of a particular robot. Thirty-one autism education staff participated, representing a range of special education settings and age groups as well as multiple professional roles (e.g., teachers, teaching assistants, speech and language therapists). Thematic analysis of the interview transcripts identified four themes: Engagingness of robots, Predictability and consistency, Roles of robots in autism education, and Need for children to interact with people, not robots. Although almost all interviewees were receptive toward using humanoid robots in the classroom, they were not uncritically approving. Rather, they perceived future robot use as likely posing a series of complex cost-benefit trade-offs over time. For example, they felt that a highly motivating, predictable social robot might increase children's readiness to learn in the classroom, but it could also prevent children from engaging fully with other people or activities. Educator views also assumed that skills learned with a robot would generalise and that robots' predictability is beneficial for autistic children, claims that need further supporting evidence. These interview results offer many points of guidance to the HRI research community about how humanoid robots could meet the specific needs of autistic learners, as well as identifying issues that will need to be resolved for robots to be both acceptable and successfully deployed in special education contexts.
|
23
|
Bioinspired Electronics for Artificial Sensory Systems. Advanced Materials 2019; 31:e1803637. PMID: 30345558. DOI: 10.1002/adma.201803637.
Abstract
Humans have a myriad of sensory receptors in different sense organs that form the five traditionally recognized senses of sight, hearing, smell, taste, and touch. These receptors detect diverse stimuli originating from the world and turn them into brain-interpretable electrical impulses for sensory cognitive processing, enabling us to communicate and socialize. Developments in biologically inspired electronics have led to the demonstration of a wide range of electronic sensors in all five traditional categories, with the potential to impact a broad spectrum of applications. Here, recent advances in bioinspired electronics that can function as potential artificial sensory systems, including prostheses and humanoid robots, are reviewed. The mechanisms of and demonstrations in mimicking biological sensory systems are discussed individually, and the remaining challenges that must be solved for their versatile use are analyzed. Recent progress in bioinspired electronic sensors shows that the five traditional senses have been successfully mimicked using novel electronic components, and that sensitivity, selectivity, and accuracy have improved to levels that outperform human sensory organs. Finally, neural interfacing techniques for connecting artificial sensors to the brain are discussed.
|
24
|
Abstract
In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others' behavior with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple's Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance toward an artificial agent (a humanoid robot). The tool consists of a questionnaire that probes participants' stance by requiring them to choose the likelihood of an explanation (mentalistic vs. mechanistic) of a behavior of the iCub robot depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that, although the explanations were somewhat biased toward the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance toward artificial agents, at least in some contexts.
|
25
|
Views of nurses and other health and social care workers on the use of assistive humanoid and animal-like robots in health and social care: a scoping review. Contemp Nurse 2018; 54:425-442. PMID: 30200824. DOI: 10.1080/10376178.2018.1519374.
Abstract
BACKGROUND Robots are being introduced in many health and social care settings. OBJECTIVES To provide an overview of the existing evidence on the views of nurses and other health and social care workers about the use of assistive humanoid and animal-like robots. METHODS Using the Joanna Briggs Institute guidelines, we searched MEDLINE, PubMed, CINAHL, EMBASE, PsycINFO, Web of Science, and the IEEE Xplore digital library. Nineteen articles met the criteria for inclusion. RESULTS Health care workers reported mixed views regarding the use of robots. They considered an array of tasks that robots could perform, addressed the issue of patient safety, and raised concerns about privacy. CONCLUSIONS A limited number of studies have explored the views of health care workers about the use of robots. Considering the fast pace at which technology is advancing in the care field, it is critical to conduct more research in this area. Impact Statement: Robots will increasingly have a role to play in nursing, health, and social care. The potential impact will be challenging for the healthcare workforce. It is therefore important for nurses and other health and social care workers to engage in discussion regarding the contribution of robots and their impact not only on nursing care but also on future roles of health and social care workers.
|
26
|
Posture Control-Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses. Front Neurorobot 2018; 12:21. PMID: 29867428. PMCID: PMC5949445. DOI: 10.3389/fnbot.2018.00021.
Abstract
Posture control is indispensable for both humans and humanoid robots, which becomes especially evident when performing sensorimotor tasks such as moving on compliant terrain or interacting with the environment. Posture control is therefore targeted in recent proposals of robot benchmarking in order to advance their development. This Methods article suggests corresponding robot tests of standing balance, drawing inspirations from the human sensorimotor system and presenting examples from robot experiments. To account for a considerable technical and algorithmic diversity among robots, we focus in our tests on basic posture control mechanisms, which provide humans with an impressive postural versatility and robustness. Specifically, we focus on the mechanically challenging balancing of the whole body above the feet in the sagittal plane around the ankle joints in concert with the upper body balancing around the hip joints. The suggested tests target three key issues of human balancing, which appear equally relevant for humanoid bipeds: (1) four basic physical disturbances (support surface (SS) tilt and translation, field and contact forces) may affect the balancing in any given degree of freedom (DoF). Targeting these disturbances allows us to abstract from the manifold of possible behavioral tasks. (2) Posture control interacts in a conflict-free way with the control of voluntary movements for undisturbed movement execution, both with "reactive" balancing of external disturbances and "proactive" balancing of self-produced disturbances from the voluntary movements. Our proposals therefore target both types of disturbances and their superposition. (3) Relevant for both versatility and robustness of the control, linkages between the posture control mechanisms across DoFs provide their functional cooperation and coordination at will and on functional demands. The suggested tests therefore include ankle-hip coordination. 
Suggested benchmarking criteria build on the evoked sway magnitude, normalized to robot weight and center of mass (COM) height, relative to reference ranges that remain to be established. The references may include human-likeness features. The proposed benchmarking concept may in principle also be applied to wearable robots, where a human user may command movements but may not be aware of the additionally required postural control, which then needs to be implemented in the robot.
|
27
|
Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots. Front Psychol 2017; 8:1366. PMID: 28912736. PMCID: PMC5582422. DOI: 10.3389/fpsyg.2017.01366.
Abstract
Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley”—a phenomenon in which highly humanlike entities provoke aversion in human observers—has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N = 60 agents) to conduct an experimental test (N = 72 participants) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity and, more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding, suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness.
|
28
|
Humanoid assessing rehabilitative exercises. Methods Inf Med 2016; 54:114-21. PMID: 24986076. DOI: 10.3414/me13-02-0054.
Abstract
INTRODUCTION This article is part of the Focus Theme of Methods of Information in Medicine on "New Methodologies for Patients Rehabilitation". BACKGROUND The article presents an approach in which a rehabilitative exercise prepared by a healthcare professional is encoded as formal knowledge and used by a humanoid robot to assist patients without involving other care actors. OBJECTIVES The main objective is the use of humanoids in rehabilitative care; an example is pulmonary rehabilitation in COPD patients. Another goal is an automated judgment functionality that determines how well the performed rehabilitation exercise matches the pre-programmed correct sequence. METHODS We use Aldebaran Robotics' NAO humanoid to set up an artificial cognitive application. The pre-programmed NAO guides an elderly patient through the humanoid-driven rehabilitation exercise but needs to evaluate the human's actions against the correct template. The patient is observed through NAO's cameras, and we use the Microsoft Kinect SDK to extract the motion path from the humanoid's recorded video. We compare the human- and humanoid-performed sequences using Dynamic Time Warping (DTW) and test the prototype. RESULTS This artificial cognitive software showcases the use of the DTW algorithm to enable humanoids to judge, in near real time, the correctness of rehabilitative exercises performed by patients following the robot's indications. CONCLUSION Better, sustainable rehabilitative care services in remote residential settings could be enabled by combining intelligent applications piloting humanoids with the DTW pattern-matching algorithm, applied at run time to compare humanoid- and human-performed sequences. In turn, this will lower the need for human care.
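The core of the judgment functionality, DTW alignment of a recorded patient sequence against the pre-programmed template, can be sketched on 1-D signals. The actual system compares Kinect-extracted joint paths; the sine-wave sequences below are illustrative stand-ins:

```python
import numpy as np

def dtw_distance(s, t):
    """Dynamic Time Warping distance between two 1-D sequences (textbook DP formulation)."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 2 * np.pi, 50))    # robot's reference motion
performed = np.sin(np.linspace(0, 2 * np.pi, 70))   # same motion at a slower tempo
wrong = np.cos(np.linspace(0, 2 * np.pi, 50))       # a different exercise

ok_score = dtw_distance(template, performed)
bad_score = dtw_distance(template, wrong)
```

Because DTW warps the time axis, a correctly performed exercise at a different tempo scores close to the template, while a different motion shape scores far from it; a threshold on the score then yields the near-real-time correct/incorrect judgment.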
|
29
|
Embodied artificial agents for understanding human social cognition. Philos Trans R Soc Lond B Biol Sci 2016; 371:20150375. PMID: 27069052. PMCID: PMC4843613. DOI: 10.1098/rstb.2015.0375.
Abstract
In this paper, we propose that experimental protocols involving artificial agents, in particular embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance, and expressiveness in one of the interaction partners (the artificial agent), and for examining the effect of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing artificial, yet human-like, systems into the human social sphere. This allows fundamental mechanisms of human social cognition to be tested, in a controlled but ecologically valid manner, at both the behavioural and the neural level. This paper reviews existing literature reporting studies in which artificial embodied agents have been used to study social cognition, and addresses the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum 'What is a social agent?'
|
30
|
Autism and social robotics: A systematic review. Autism Res 2015; 9:165-83. PMID: 26483270. DOI: 10.1002/aur.1527.
Abstract
Social robotics could be a promising method for Autism Spectrum Disorder (ASD) treatment. The aim of this article is to carry out a systematic review of the studies on this topic published in the last 10 years, addressing the question: can social robots be a useful tool in autism therapy? We followed the PRISMA guidelines, and the protocol was registered in the PROSPERO database (CRD42015016158). We found many positive implications of using social robots in therapy. For example, ASD subjects often performed better with a robot partner than with a human partner; ASD patients sometimes directed toward robots behaviors that typically developing (TD) patients directed toward human agents; ASD subjects displayed many social behaviors toward robots; during robotic sessions, ASD subjects showed reduced repetitive and stereotyped behaviors; and social robots improved spontaneous language during therapy sessions. Robots therefore provide therapists and researchers a means to connect with autistic subjects more easily, but studies in this area are still insufficient. It remains to be clarified whether sex, intelligence quotient, and age of participants affect the outcome of therapy, and whether any beneficial effects occur only during the robotic session or are still observable outside the clinical/experimental context.
|
31
|
Robot Comedy Lab: experimenting with the social dynamics of live performance. Front Psychol 2015; 6:1253. PMID: 26379585. PMCID: PMC4548079. DOI: 10.3389/fpsyg.2015.01253.
Abstract
The success of live comedy depends on a performer's ability to “work” an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture, and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior: people respond more positively when the robot looks at them and negatively when it looks away, and performative gestures also contribute to different patterns of audience response. This demonstrates how the responses of individual audience members depend on the specific interaction they are having with the performer. This work provides insights into how to design more effective, more socially engaging forms of robot interaction that can be used in a variety of service contexts.
|
32
|
Passive motion paradigm: an alternative to optimal control. Front Neurorobot 2011; 5:4. PMID: 22207846. PMCID: PMC3246361. DOI: 10.3389/fnbot.2011.00004.
Abstract
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition across two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques for control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain the desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the DoFs problem without explicit kinematic inversion or cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution; it also provides the self with information on the feasibility, consequences, understanding, and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. The paper is therefore at once a review of the PMP rationale as a computational theory and a perspective on how to develop it for designing better cognitive architectures.
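The PMP machinery, a goal-induced force field that "animates" the body schema without explicit kinematic inversion, can be sketched for a planar 2-link arm: a virtual attractive force at the end effector is mapped through the Jacobian transpose to joint motion. The kinematics, gains, and integration scheme below are illustrative assumptions, not the paper's model:

```python
import numpy as np

L1, L2 = 1.0, 1.0   # link lengths (assumed)

def fk(q):
    """End-effector position of a planar 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Geometric Jacobian of the end-effector position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

goal = np.array([1.0, 1.0])    # reachable target for the hand
q = np.array([0.3, 0.5])       # initial joint configuration
K, dt = 1.0, 0.02

# PMP-style relaxation: the goal induces a virtual force field at the hand;
# the Jacobian transpose spreads it to the joints (identity joint admittance),
# and the redundancy resolves itself "at runtime" -- no inverse kinematics.
for _ in range(5000):
    F = K * (goal - fk(q))            # attractive force at the end effector
    q = q + dt * jacobian(q).T @ F    # joints relax along the induced field
```

The loop is a gradient-like flow on the hand-position error, so the arm settles in a configuration whose end effector coincides with the goal, with the joint-space path emerging from the field rather than being planned.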