1
Xu F, Xia Y, Wu X. An adaptive control framework based multi-modal information-driven dance composition model for musical robots. Front Neurorobot 2023;17:1270652. [PMID: 37876550] [PMCID: PMC10590936] [DOI: 10.3389/fnbot.2023.1270652] [Citation(s) in RCA: 0] [Received: 08/01/2023] [Accepted: 08/31/2023]
Abstract
Currently, most robot dances are pre-compiled: adapting a dance to another type of music requires manually adjusting the relevant parameters and meta-actions, which greatly limits the robot's usefulness. To close this gap, this study proposes a dance composition model for mobile robots based on multimodal information. The model consists of three parts. (1) Extraction of multimodal information. A temporal-structure-feature method within a structure analysis framework divides an audio music file into musical structures; a hierarchical emotion detection framework then extracts information (rhythm, emotion, tension, etc.) for each segmented structure; the safety of the moving robot with respect to surrounding objects is computed; finally, the stage color at the robot's location is extracted and mapped to the corresponding atmospheric emotion. (2) Initialization of the dance library. Dance compositions are divided into four categories according to the classification of musical emotions, and each category is further divided into skilled compositions and general compositions. (3) Composition. The total path length is obtained by combining the multimodal information with the emotion, initial speed, and period of each musical structure; target points are then planned according to the selected dance composition. An adaptive control framework based on the Cerebellar Model Articulation Controller (CMAC) and compensation controllers tracks the target-point trajectory, yielding the final dance composition. Mobile robot dance composition provides a new method and concept for humanoid robot dance composition.
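The CMAC trajectory tracking mentioned above can be illustrated with a tile-coded function approximator. The sketch below is a minimal, generic 1-D CMAC trained with LMS updates; the number of tilings, the cell count, and the sinusoidal target profile are all illustrative choices, not parameters from the paper:

```python
import numpy as np

class CMAC:
    """Minimal 1-D Cerebellar Model Articulation Controller: several
    overlapping tilings quantize the input, and the output is the mean
    of the weights of the active cells."""
    def __init__(self, n_tilings=8, n_cells=32, x_min=0.0, x_max=1.0, lr=0.1):
        self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
        self.x_min, self.x_max = x_min, x_max
        self.w = np.zeros((n_tilings, n_cells))

    def _active(self, x):
        # each tiling is shifted by a fraction of one cell width
        span = self.x_max - self.x_min
        idx = []
        for t in range(self.n_tilings):
            cell = int((x - self.x_min) / span * (self.n_cells - 1)
                       + t / self.n_tilings)
            idx.append(min(max(cell, 0), self.n_cells - 1))
        return idx

    def predict(self, x):
        return np.mean([self.w[t, c] for t, c in enumerate(self._active(x))])

    def train(self, x, target):
        err = target - self.predict(x)
        for t, c in enumerate(self._active(x)):
            self.w[t, c] += self.lr * err  # distribute the correction over tilings

# train the CMAC to reproduce a desired profile along a normalized dance path
cmac = CMAC()
xs = np.linspace(0, 1, 200)
targets = np.sin(2 * np.pi * xs)  # stand-in for a target trajectory
for _ in range(50):
    for x, y in zip(xs, targets):
        cmac.train(x, y)
preds = np.array([cmac.predict(x) for x in xs])
print(np.max(np.abs(preds - targets)))  # residual shrinks as training proceeds
```

In the paper's framework a compensation controller would additionally correct the CMAC's residual error online; here only the approximator itself is shown.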
Affiliation(s)
- Fumei Xu
- School of Music, Jiangxi Normal University, Nanchang, Jiangxi, China
- Yu Xia
- School of Aviation Services and Music, Nanchang Hangkong University, Nanchang, Jiangxi, China
- Xiaorun Wu
- School of Information Engineering, Nanchang Hangkong University, Nanchang, Jiangxi, China
2
Automatic Aesthetics Evaluation of Robotic Dance Poses Based on Hierarchical Processing Network. Comput Intell Neurosci 2022;2022:5827097. [PMID: 36156961] [PMCID: PMC9507690] [DOI: 10.1155/2022/5827097] [Citation(s) in RCA: 0] [Received: 04/17/2022] [Revised: 08/28/2022] [Accepted: 09/03/2022]
Abstract
Vision plays an important role in the aesthetic cognition of human beings. When creating choreography, human dancers, who always observe their own dance poses in a mirror, understand the aesthetics of those poses and aim to improve their dancing performance. To develop artificial intelligence, a robot should establish a similar mechanism that imitates this human dance behaviour. Inspired by this, this paper designs a way for a robot to visually perceive its own dance poses and constructs a novel dataset of dance poses based on real NAO robots. On this basis, this paper proposes a hierarchical processing network-based approach to automatic aesthetics evaluation of robotic dance poses. The hierarchical processing network first extracts primary visual features using three parallel CNNs, then uses a synthesis CNN to achieve high-level association and comprehensive processing on the basis of multi-modal feature fusion, and finally makes an automatic aesthetics decision. Notably, the design of this hierarchical processing network is inspired by research findings in neuroaesthetics. Experimental results show that our approach achieves an aesthetics-evaluation accuracy of 82.3%, which is superior to the existing methods.
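The three-parallel-CNN plus synthesis-stage data flow can be illustrated with a toy forward pass. The sketch below uses plain NumPy, single-channel 3×3 convolutions, global average pooling, and randomly initialized (untrained) weights purely to show how parallel features are fused before a final decision; none of these sizes or layers reflects the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid-mode 2-D convolution on a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def branch(img, kernels):
    """One 'primary feature' branch: conv -> ReLU -> global average pool."""
    return np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])

# three parallel branches, each with its own (here random) kernels
branches = [[rng.standard_normal((3, 3)) for _ in range(4)] for _ in range(3)]
# 'synthesis' stage: a dense layer over the fused 12-D feature vector
W, b = rng.standard_normal((2, 12)), np.zeros(2)

def hierarchical_forward(img):
    fused = np.concatenate([branch(img, ks) for ks in branches])  # feature fusion
    logits = W @ fused + b                                        # synthesis stage
    return int(np.argmax(logits))  # 0 / 1 aesthetics decision

pose_img = rng.random((16, 16))  # stand-in for a robot dance-pose image
decision = hierarchical_forward(pose_img)
print(decision)
```

A real implementation would train all stages end to end on the labeled pose dataset; the point here is only the parallel-extract-then-fuse topology.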
3
Multiple Visual Feature Integration Based Automatic Aesthetics Evaluation of Robotic Dance Motions. Information 2021. [DOI: 10.3390/info12030095] [Citation(s) in RCA: 4]
Abstract
Imitation of human behaviors is one of the effective ways to develop artificial intelligence. Human dancers, standing in front of a mirror, always perform autonomous aesthetics evaluation of their own dance motions, which they observe in the mirror. Meanwhile, in the visual aesthetics cognition of the human brain, space and shape are two important visual elements perceived from motion. Inspired by these facts, this paper proposes a novel mechanism for automatic aesthetics evaluation of robotic dance motions based on multiple visual feature integration. In the mechanism, a video of a robotic dance motion is first converted into several kinds of motion history images; a spatial feature (ripple space coding) and shape features (Zernike moments and curvature-based Fourier descriptors) are then extracted from the optimized motion history images. Based on feature integration, a homogeneous ensemble classifier, which uses three different random forests, is deployed to build a machine aesthetics model, aiming to give the machine a human-like aesthetic ability. The feasibility of the proposed mechanism has been verified by simulation experiments, and the results show that our ensemble classifier achieves an aesthetics-evaluation accuracy of 75%. The performance of our mechanism is superior to that of existing approaches.
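The motion-history-image step can be illustrated in isolation. The sketch below uses a common MHI formulation (threshold on frame differences, linear decay of older motion), which may differ in detail from the optimized MHI variants used in the paper; the threshold and decay values are illustrative:

```python
import numpy as np

def motion_history_image(frames, tau=255, delta=10, decay=16):
    """Motion History Image: pixels that moved recently are bright,
    and older motion fades linearly toward zero."""
    mhi = np.zeros(frames[0].shape, dtype=float)
    prev = frames[0]
    for frame in frames[1:]:
        moving = np.abs(frame.astype(float) - prev.astype(float)) > delta
        mhi = np.where(moving, tau, np.maximum(mhi - decay, 0.0))
        prev = frame
    return mhi

# synthetic clip: a bright square sweeps left to right across 5 frames
frames = []
for t in range(5):
    f = np.zeros((32, 32), dtype=np.uint8)
    f[10:20, 4 * t:4 * t + 8] = 200
    frames.append(f)

mhi = motion_history_image(frames)
print(mhi.max(), mhi.min())  # most recent motion at tau, static regions at 0
```

Shape descriptors such as Zernike moments and Fourier descriptors would then be computed on images like `mhi` before being fed to the random-forest ensemble.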
4
Santos M, Egerstedt M. From Motions to Emotions: Can the Fundamental Emotions be Expressed in a Robot Swarm? Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00665-6] [Citation(s) in RCA: 6]
5
Multimodal Information Fusion for Automatic Aesthetics Evaluation of Robotic Dance Poses. Int J Soc Robot 2019. [DOI: 10.1007/s12369-019-00535-w] [Citation(s) in RCA: 9]
6
Abstract
In general, human dance is created by the imagination and innovativeness of human dancers, which in turn provides inspiration for robotic choreography generation. This paper proposes a novel mechanism for a humanoid robot to autonomously create good choreography with an imagination-like process modeled on human dance. The mechanism combines innovativeness with the preservation of the characteristics of human dance, and enables a humanoid robot to exhibit the capabilities of "imitation, memory, imagination, process and combination". The proposed mechanism has been implemented on a real humanoid robot, NAO, to verify its feasibility and performance, and experimental results are presented that demonstrate its good performance.
7
Abstract
Robot dance is an important topic in robotics. Conventional robot dance systems rely mainly on the beats or rhythms of music; however, they suffer from limited dance styles and little action novelty. In this paper, we instead develop a humanoid robot dance system driven by musical structures and emotions. In the proposed system, a musical phrase and a dance phrase are the basic structural units of music and dance, respectively. A musical phrasing algorithm based on music theory divides a piece of music into a sequence of phrases. Once the emotion of each phrase has been recognized, an emotion sequence can be established. Meanwhile, a hidden Markov model (HMM) matches a dance phrase sequence to the emotion sequence. In particular, several concepts of the "chance method" created by choreographer Merce Cunningham guide our robot dance system: a dance phrase is choreographed by randomly selecting and combining a number of actions from a predesigned action library. With this approach, a single piece of music can generate diverse robotic dance motions, demonstrating the novelty and diversity of robot dance. Experiments on our humanoid robot "Alpha1 Pro" show that the robot dances well to music according to its structures and emotions, and that its performances are well received by a variety of viewers.
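The HMM matching of dance phrases to an emotion sequence, plus the chance-method random action selection, can be sketched as follows. The emotion and style labels, the transition and emission probabilities, and the action library are all made up for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

emotions = ["happy", "calm", "sad"]              # observed per musical phrase
dance_styles = ["energetic", "flowing", "slow"]  # hidden dance-phrase styles

# illustrative HMM parameters: transitions favour staying in the same style,
# and each style emits one dominant emotion
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])   # transition probabilities
B = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])   # emission probabilities
pi = np.array([1/3, 1/3, 1/3])    # initial distribution

def viterbi(obs):
    """Most likely hidden dance-style sequence for an emotion sequence."""
    n, T = len(pi), len(obs)
    delta = np.zeros((T, n))
    psi = np.zeros((T, n), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores.max() * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# emotion sequence recognized from the musical phrases: happy, happy, sad
styles = [dance_styles[s] for s in viterbi([0, 0, 2])]
print(styles)

# 'chance method': a dance phrase is a random combination of library actions
action_library = ["arm_wave", "spin", "step", "bow", "kick"]
phrase = list(rng.choice(action_library, size=3, replace=False))
print(phrase)
```

With these toy parameters the decoded style sequence tracks the emotion sequence, while the random phrase assembly is what lets one piece of music yield many different dances.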
Affiliation(s)
- Ruilin Qin
- Cognitive Science Department, Fujian Provincial Key Laboratory of Brain-inspired Computing, School of Information Science and Engineering, Xiamen University, Xiamen 361005, P. R. China
- Changle Zhou
- Cognitive Science Department, Fujian Provincial Key Laboratory of Brain-inspired Computing, School of Information Science and Engineering, Xiamen University, Xiamen 361005, P. R. China
- He Zhu
- Cognitive Science Department, Fujian Provincial Key Laboratory of Brain-inspired Computing, School of Information Science and Engineering, Xiamen University, Xiamen 361005, P. R. China
- Minghui Shi
- Cognitive Science Department, Fujian Provincial Key Laboratory of Brain-inspired Computing, School of Information Science and Engineering, Xiamen University, Xiamen 361005, P. R. China
- Fei Chao
- Cognitive Science Department, Fujian Provincial Key Laboratory of Brain-inspired Computing, School of Information Science and Engineering, Xiamen University, Xiamen 361005, P. R. China
- Na Li
- Dance Studio, Department of Music, Art College, Xiamen University, Xiamen 361005, P. R. China
8
Kulic D, Venture G, Yamane K, Demircan E, Mizuuchi I, Mombaur K. Anthropomorphic Movement Analysis and Synthesis: A Survey of Methods and Applications. IEEE Trans Robot 2016. [DOI: 10.1109/tro.2016.2587744] [Citation(s) in RCA: 34]
9
Abstract
In this paper, a motion editing tool for creating dancing motions of a humanoid robot is proposed. Building performances or dances for a humanoid robot requires a motion editing tool for creating specific motions, and a dancing robot in particular needs to generate natural-looking motions. The proposed tool and its algorithm create such natural motions: a motion is composed of several key steps, captured from every joint while the robot is posed, and the tool generates a continuous motion by interpolating between the steps. A 50-cm-tall humanoid robot was developed to test the proposed tool, and, using it, the robot demonstrated a natural dancing performance.
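The capture-keyframes-then-interpolate workflow can be sketched generically. The cosine-easing blend below is one common smoothing choice and is an assumption on my part, not necessarily the interpolation the tool actually uses; the joint names and angles are invented for illustration:

```python
import numpy as np

def interpolate_motion(keyframes, steps_between=10):
    """Generate a continuous joint-angle trajectory from captured key poses.
    Cosine easing makes each joint start and end every segment with zero
    velocity, which tends to look more natural than linear blending."""
    keyframes = np.asarray(keyframes, dtype=float)
    trajectory = [keyframes[0]]
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for k in range(1, steps_between + 1):
            s = k / steps_between
            blend = (1 - np.cos(np.pi * s)) / 2  # eases in and out of each pose
            trajectory.append(a + blend * (b - a))
    return np.array(trajectory)

# three captured poses for a 2-joint arm (degrees): rest -> raised -> wave
poses = [[0, 0], [45, 90], [30, 60]]
traj = interpolate_motion(poses, steps_between=10)
print(traj.shape)  # 1 start pose + 2 segments x 10 interpolated steps each
print(traj[10])    # the trajectory passes exactly through the second key pose
```

The same idea scales directly to a full humanoid by widening each keyframe to one angle per joint.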
Affiliation(s)
- Dae-Young Lim
- Automotive Components and Materials R&BD Group, Korea Institute of Industrial Technology, 6 Cheomdan-gwagiro, Buk-gu, Gwangju 500-480, South Korea
- Hyun-Jin Kwak
- Department of Control Engineering and Robotics, Mokpo National University, 1666 Youngsan-ro, Cheonggye-myeon, Muan-gun, Jeonnam 534-729, South Korea
- Young-Jae Ryoo
- Department of Control Engineering and Robotics, Mokpo National University, 1666 Youngsan-ro, Cheonggye-myeon, Muan-gun, Jeonnam 534-729, South Korea
10
Kakehashi Y, Izawa T, Shirai T, Nakanishi Y, Okada K, Inaba M. Achievement of Hula Hooping by Robots Through Deriving Principle Structure Towards Flexible Spinal Motion. J Robot Mechatron 2012. [DOI: 10.20965/jrm.2012.p0540] [Citation(s) in RCA: 2]
Abstract
Dance is the art of physical expression. To enhance the expressiveness of robot motion and foster sophisticated communication, we aspire to enable robots to dance. Flexible motion of the torso is presumed to be important in dancing, so we picked hula hooping as an accessible example. In this paper, we lay out a simple model of hula hooping that captures both the torso and the hoop, and then verify its applicability on dedicated robots, showing that the model enables real robots to hula hoop successfully.